From patchwork Fri Oct 23 16:51:02 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 292305
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 Jordan Crouse, Eric Anholt, Sam Ravnborg, Emil Velikov,
 AngeloGioacchino Del Regno, Jonathan Marek, Sharat Masetty,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 01/23] drm/msm: Fix a couple incorrect usages of get_vaddr_active()
Date: Fri, 23 Oct 2020 09:51:02 -0700
Message-Id: <20201023165136.561680-2-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

The microcode bo's should never be madvise(WONTNEED), so these should
not be using msm_gem_get_vaddr_active().

Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index c941c8138f25..2180650a03bc 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -426,7 +426,7 @@ static int a5xx_preempt_start(struct msm_gpu *gpu)
 static void a5xx_ucode_check_version(struct a5xx_gpu *a5xx_gpu,
 		struct drm_gem_object *obj)
 {
-	u32 *buf = msm_gem_get_vaddr_active(obj);
+	u32 *buf = msm_gem_get_vaddr(obj);
 
 	if (IS_ERR(buf))
 		return;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 8915882e4444..16eaaf0804ca 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -522,7 +522,7 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
 static void a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu,
 		struct drm_gem_object *obj)
 {
-	u32 *buf = msm_gem_get_vaddr_active(obj);
+	u32 *buf = msm_gem_get_vaddr(obj);
 
 	if (IS_ERR(buf))
 		return;
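A note on the helper semantics this patch relies on, visible later in the series: msm_gem_get_vaddr() only maps a buffer while it is MSM_MADV_WILLNEED, whereas msm_gem_get_vaddr_active() also tolerates purged objects and exists solely for dumping state after a GPU hang. Below is a small self-contained sketch of that gating policy; the names merely mirror the driver's and this is not the kernel code itself (build with: cc -o madv madv.c).

#include <stdio.h>

/* Toy model of the madvise-state gating; names only echo the driver. */
enum madv { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

struct bo { enum madv madv; };

/* get_vaddr(obj, accept) analogue: refuse to map once the object's state
 * exceeds what the caller is prepared to accept. */
static int can_map(const struct bo *bo, enum madv accept)
{
	return bo->madv <= accept;
}

int main(void)
{
	struct bo microcode = { MADV_WILLNEED };
	struct bo scratch = { MADV_DONTNEED };

	/* msm_gem_get_vaddr() analogue: only WILLNEED buffers map. */
	printf("microcode: %d, purgeable scratch: %d\n",
	       can_map(&microcode, MADV_WILLNEED),
	       can_map(&scratch, MADV_WILLNEED));
	/* _active() analogue: accepts anything, for post-hang dumps only. */
	printf("purgeable scratch via active: %d\n",
	       can_map(&scratch, MADV_PURGED));
	return 0;
}

A microcode bo is always WILLNEED, so the stricter helper is the right one for the ucode version checks above.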
From patchwork Fri Oct 23 16:51:03 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 292304
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 02/23] drm/msm/gem: Add obj->lock wrappers
Date: Fri, 23 Oct 2020 09:51:03 -0700
Message-Id: <20201023165136.561680-3-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

This will make it easier to transition over to obj->resv locking for
everything that is per-bo locking.

Signed-off-by: Rob Clark
Reviewed-by: Kristian H.
Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 99 ++++++++++++++++------------------- drivers/gpu/drm/msm/msm_gem.h | 28 ++++++++++ 2 files changed, 74 insertions(+), 53 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 14e14caf90f9..afef9c6b1a1c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -178,15 +178,15 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **p; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } p = get_pages(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return p; } @@ -252,14 +252,14 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) * vm_ops.open/drm_gem_mmap_obj and close get and put * a reference on obj. So, we dont need to hold one here. */ - err = mutex_lock_interruptible(&msm_obj->lock); + err = msm_gem_lock_interruptible(obj); if (err) { ret = VM_FAULT_NOPAGE; goto out; } if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return VM_FAULT_SIGBUS; } @@ -280,7 +280,7 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) ret = vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV)); out_unlock: - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); out: return ret; } @@ -289,10 +289,9 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) static uint64_t mmap_offset(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); /* Make it mmapable */ ret = drm_gem_create_mmap_offset(obj); @@ -308,11 +307,10 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) { uint64_t offset; - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); offset = mmap_offset(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return offset; } @@ -322,7 +320,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) @@ -341,7 +339,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace == aspace) @@ -360,14 +358,14 @@ static void del_vma(struct msm_gem_vma *vma) kfree(vma); } -/* Called with msm_obj->lock locked */ +/* Called with msm_obj locked */ static void put_iova(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma, *tmp; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->aspace) { @@ -382,11 +380,10 @@ static int msm_gem_get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; int ret = 0; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + 
WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); @@ -421,7 +418,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, if (msm_obj->flags & MSM_BO_MAP_PRIV) prot |= IOMMU_PRIV; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) return -EBUSY; @@ -446,11 +443,10 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); u64 local; int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -461,7 +457,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -479,12 +475,11 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -495,12 +490,11 @@ int msm_gem_get_iova(struct drm_gem_object *obj, uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); WARN_ON(!vma); return vma ? vma->iova : 0; @@ -514,16 +508,15 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -564,20 +557,20 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) if (obj->import_attach) return ERR_PTR(-ENODEV); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); if (WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } /* increment vmap_count *before* vmap() call, so shrinker can - * check vmap_count (is_vunmapable()) outside of msm_obj->lock. + * check vmap_count (is_vunmapable()) outside of msm_obj lock. 
* This guarantees that we won't try to msm_gem_vunmap() this * same object from within the vmap() call (while we already - * hold msm_obj->lock) + * hold msm_obj lock) */ msm_obj->vmap_count++; @@ -595,12 +588,12 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) } } - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return msm_obj->vaddr; fail: msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(ret); } @@ -624,10 +617,10 @@ void msm_gem_put_vaddr(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(msm_obj->vmap_count < 1); msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* Update madvise status, returns true if not purged, else @@ -637,7 +630,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); @@ -646,7 +639,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) madv = msm_obj->madv; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return (madv != __MSM_MADV_PURGED); } @@ -683,14 +676,14 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } static void msm_gem_vunmap_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj))) return; @@ -705,7 +698,7 @@ void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass) mutex_lock_nested(&msm_obj->lock, subclass); msm_gem_vunmap_locked(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* must be called before _move_to_active().. 
 */
@@ -816,7 +809,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
-	mutex_lock(&msm_obj->lock);
+	msm_gem_lock(obj);
 
 	switch (msm_obj->madv) {
 	case __MSM_MADV_PURGED:
@@ -884,7 +877,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 		describe_fence(fence, "Exclusive", m);
 	rcu_read_unlock();
 
-	mutex_unlock(&msm_obj->lock);
+	msm_gem_unlock(obj);
 }
 
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
@@ -929,7 +922,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 
 	list_del(&msm_obj->mm_list);
 
-	mutex_lock(&msm_obj->lock);
+	msm_gem_lock(obj);
 
 	put_iova(obj);
 
@@ -950,7 +943,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 
 	drm_gem_object_release(obj);
 
-	mutex_unlock(&msm_obj->lock);
+	msm_gem_unlock(obj);
 
 	kfree(msm_obj);
 }
@@ -1070,10 +1063,10 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
 		struct msm_gem_vma *vma;
 		struct page **pages;
 
-		mutex_lock(&msm_obj->lock);
+		msm_gem_lock(obj);
 
 		vma = add_vma(obj, NULL);
-		mutex_unlock(&msm_obj->lock);
+		msm_gem_unlock(obj);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 			goto fail;
@@ -1157,22 +1150,22 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	npages = size / PAGE_SIZE;
 
 	msm_obj = to_msm_bo(obj);
-	mutex_lock(&msm_obj->lock);
+	msm_gem_lock(obj);
 	msm_obj->sgt = sgt;
 	msm_obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
 	if (!msm_obj->pages) {
-		mutex_unlock(&msm_obj->lock);
+		msm_gem_unlock(obj);
 		ret = -ENOMEM;
 		goto fail;
 	}
 
 	ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
 	if (ret) {
-		mutex_unlock(&msm_obj->lock);
+		msm_gem_unlock(obj);
 		goto fail;
 	}
 
-	mutex_unlock(&msm_obj->lock);
+	msm_gem_unlock(obj);
 
 	mutex_lock(&dev->struct_mutex);
 	list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index a1bf741b9b89..f6482154e8bb 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -93,6 +93,34 @@ struct msm_gem_object {
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
+static inline void
+msm_gem_lock(struct drm_gem_object *obj)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	mutex_lock(&msm_obj->lock);
+}
+
+static inline int
+msm_gem_lock_interruptible(struct drm_gem_object *obj)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	return mutex_lock_interruptible(&msm_obj->lock);
+}
+
+static inline void
+msm_gem_unlock(struct drm_gem_object *obj)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	mutex_unlock(&msm_obj->lock);
+}
+
+static inline bool
+msm_gem_is_locked(struct drm_gem_object *obj)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	return mutex_is_locked(&msm_obj->lock);
+}
+
 static inline bool is_active(struct msm_gem_object *msm_obj)
 {
 	return atomic_read(&msm_obj->active_count);
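The point of the wrappers in this patch is that every call site now goes through one narrow interface, so a later patch can swap the underlying primitive (msm_obj->lock today, the object's dma_resv lock eventually) without touching the callers. A self-contained sketch of that pattern in plain C with pthreads follows; the names are invented for illustration, not the driver's API (build with: cc -pthread -o wrap wrap.c).

#include <pthread.h>
#include <stdio.h>

/* Stand-in for a GEM object with a per-object lock. */
struct obj {
	pthread_mutex_t lock;
	int vmap_count;
};

/* All callers funnel through these two wrappers (the msm_gem_lock()/
 * msm_gem_unlock() analogues), so changing the locking primitive later
 * means editing only these functions, not every call site. */
static void obj_lock(struct obj *o)   { pthread_mutex_lock(&o->lock); }
static void obj_unlock(struct obj *o) { pthread_mutex_unlock(&o->lock); }

/* Example caller that never names the lock type directly. */
static void get_vaddr(struct obj *o)
{
	obj_lock(o);
	o->vmap_count++;
	obj_unlock(o);
}

int main(void)
{
	struct obj o = { .lock = PTHREAD_MUTEX_INITIALIZER };

	get_vaddr(&o);
	printf("vmap_count = %d\n", o.vmap_count);
	return 0;
}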
From patchwork Fri Oct 23 16:51:04 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 284932
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-kernel@vger.kernel.org (open list)
Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 03/23] drm/msm/gem: Rename internal get_iova_locked helper Date: Fri, 23 Oct 2020 09:51:04 -0700 Message-Id: <20201023165136.561680-4-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We'll need to introduce a _locked() version of msm_gem_get_iova(), so we need to make that name available. Signed-off-by: Rob Clark Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index afef9c6b1a1c..dec89fe79025 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -376,7 +376,7 @@ put_iova(struct drm_gem_object *obj) } } -static int msm_gem_get_iova_locked(struct drm_gem_object *obj, +static int get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { @@ -448,7 +448,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, &local, + ret = get_iova_locked(obj, aspace, &local, range_start, range_end); if (!ret) @@ -478,7 +478,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int ret; msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); + ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX); msm_gem_unlock(obj); return ret; From patchwork Fri Oct 23 16:51:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292295 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 467D4C4363A for ; Fri, 23 Oct 2020 16:52:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C63A0241A3 for ; Fri, 23 Oct 2020 16:52:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="kVD8LMEM" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752725AbgJWQuV (ORCPT ); Fri, 23 Oct 2020 12:50:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51664 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752722AbgJWQuT (ORCPT ); Fri, 23 Oct 2020 12:50:19 -0400 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5395AC0613CE; Fri, 23 Oct 2020 09:50:19 -0700 (PDT) Received: by mail-pg1-x542.google.com with SMTP id x13so1689399pgp.7; Fri, 23 Oct 2020 09:50:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 Sumit Semwal, Christian König, Thomas Zimmermann, Emil Velikov, Sam Ravnborg,
 Maxime Ripard, Liviu Dudau, Brian Masney, Christophe JAILLET, Harigovindan P,
 Jordan Crouse, Rajendra Nayak, linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v4 04/23] drm/msm/gem: Move prototypes to msm_gem.h
Date: Fri, 23 Oct 2020 09:51:05 -0700
Message-Id: <20201023165136.561680-5-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

Signed-off-by: Rob Clark
Reviewed-by: Kristian H.
Kristensen --- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 1 + drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 1 + drivers/gpu/drm/msm/dsi/dsi_host.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 54 ---------------------- drivers/gpu/drm/msm/msm_fbdev.c | 1 + drivers/gpu/drm/msm/msm_gem.h | 56 +++++++++++++++++++++++ 6 files changed, 60 insertions(+), 54 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index a0253297bc76..b65b2329cc8d 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -11,6 +11,7 @@ #include #include "mdp4_kms.h" +#include "msm_gem.h" struct mdp4_crtc { struct drm_crtc base; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index c39dad151bb6..81fbd52ad7e7 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -15,6 +15,7 @@ #include #include "mdp5_kms.h" +#include "msm_gem.h" #define CURSOR_WIDTH 64 #define CURSOR_HEIGHT 64 diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index b17ac6c27554..5e7cdc11c764 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -26,6 +26,7 @@ #include "sfpb.xml.h" #include "dsi_cfg.h" #include "msm_kms.h" +#include "msm_gem.h" #define DSI_RESET_TOGGLE_DELAY_MS 20 diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b9dd8f8f4887..79ee7d05b363 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -273,28 +273,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, void msm_gem_shrinker_init(struct drm_device *dev); void msm_gem_shrinker_cleanup(struct drm_device *dev); -int msm_gem_mmap_obj(struct drm_gem_object *obj, - struct vm_area_struct *vma); -int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); -vm_fault_t msm_gem_fault(struct vm_fault *vmf); -uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -uint64_t msm_gem_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -struct page **msm_gem_get_pages(struct drm_gem_object *obj); -void msm_gem_put_pages(struct drm_gem_object *obj); -int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, - struct drm_mode_create_dumb *args); -int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, - uint32_t handle, uint64_t *offset); struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); @@ -303,38 +281,8 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void *msm_gem_get_vaddr(struct drm_gem_object *obj); -void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); -void msm_gem_put_vaddr(struct 
drm_gem_object *obj); -int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); -int msm_gem_sync_object(struct drm_gem_object *obj, - struct msm_fence_context *fctx, bool exclusive); -void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu); -void msm_gem_active_put(struct drm_gem_object *obj); -int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout); -int msm_gem_cpu_fini(struct drm_gem_object *obj); -void msm_gem_free_object(struct drm_gem_object *obj); -int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, - uint32_t size, uint32_t flags, uint32_t *handle, char *name); -struct drm_gem_object *msm_gem_new(struct drm_device *dev, - uint32_t size, uint32_t flags); -struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev, - uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace, bool locked); -struct drm_gem_object *msm_gem_import(struct drm_device *dev, - struct dma_buf *dmabuf, struct sg_table *sgt); void msm_gem_free_work(struct work_struct *work); -__printf(2, 3) -void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); - int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, @@ -447,8 +395,6 @@ void __init msm_dpu_register(void); void __exit msm_dpu_unregister(void); #ifdef CONFIG_DEBUG_FS -void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); -void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m); int msm_debugfs_late_init(struct drm_device *dev); int msm_rd_debugfs_init(struct drm_minor *minor); diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index 47235f8c5922..678dba1725a6 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -9,6 +9,7 @@ #include #include "msm_drv.h" +#include "msm_gem.h" #include "msm_kms.h" extern int msm_gem_mmap_obj(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f6482154e8bb..fbad08badf43 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,6 +93,62 @@ struct msm_gem_object { }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) +int msm_gem_mmap_obj(struct drm_gem_object *obj, + struct vm_area_struct *vma); +int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); +vm_fault_t msm_gem_fault(struct vm_fault *vmf); +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_get_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +uint64_t msm_gem_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova(struct drm_gem_object 
*obj,
+		struct msm_gem_address_space *aspace);
+struct page **msm_gem_get_pages(struct drm_gem_object *obj);
+void msm_gem_put_pages(struct drm_gem_object *obj);
+int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+		struct drm_mode_create_dumb *args);
+int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
+		uint32_t handle, uint64_t *offset);
+void *msm_gem_get_vaddr(struct drm_gem_object *obj);
+void *msm_gem_get_vaddr_active(struct drm_gem_object *obj);
+void msm_gem_put_vaddr(struct drm_gem_object *obj);
+int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
+int msm_gem_sync_object(struct drm_gem_object *obj,
+		struct msm_fence_context *fctx, bool exclusive);
+void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu);
+void msm_gem_active_put(struct drm_gem_object *obj);
+int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout);
+int msm_gem_cpu_fini(struct drm_gem_object *obj);
+void msm_gem_free_object(struct drm_gem_object *obj);
+int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
+		uint32_t size, uint32_t flags, uint32_t *handle, char *name);
+struct drm_gem_object *msm_gem_new(struct drm_device *dev,
+		uint32_t size, uint32_t flags);
+struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev,
+		uint32_t size, uint32_t flags);
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
+		uint32_t flags, struct msm_gem_address_space *aspace,
+		struct drm_gem_object **bo, uint64_t *iova);
+void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size,
+		uint32_t flags, struct msm_gem_address_space *aspace,
+		struct drm_gem_object **bo, uint64_t *iova);
+void msm_gem_kernel_put(struct drm_gem_object *bo,
+		struct msm_gem_address_space *aspace, bool locked);
+struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+		struct dma_buf *dmabuf, struct sg_table *sgt);
+__printf(2, 3)
+void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...);
+#ifdef CONFIG_DEBUG_FS
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
+void msm_gem_describe_objects(struct list_head *list, struct seq_file *m);
+#endif
+
 static inline void
 msm_gem_lock(struct drm_gem_object *obj)
 {

From patchwork Fri Oct 23 16:51:06 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 284931
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 05/23] drm/msm/gem: Add some _locked() helpers
Date: Fri, 23 Oct 2020 09:51:06 -0700
Message-Id: <20201023165136.561680-6-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

When we cut-over to using dma_resv_lock/etc instead of msm_obj->lock,
we'll need these for the submit path (where resv->lock is already
held).

Signed-off-by: Rob Clark
Reviewed-by: Kristian H.
Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 89 +++++++++++++++++++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 6 +++ 2 files changed, 75 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index dec89fe79025..e0d8d739b068 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -435,18 +435,14 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, msm_obj->sgt, obj->size >> PAGE_SHIFT); } -/* - * get iova and pin it. Should have a matching put - * limits iova to specified range (in pages) - */ -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, +static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { u64 local; int ret; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); ret = get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -457,10 +453,32 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; + return ret; +} + +/* + * get iova and pin it. Should have a matching put + * limits iova to specified range (in pages) + */ +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end) +{ + int ret; + + msm_gem_lock(obj); + ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); msm_gem_unlock(obj); + return ret; } +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova) +{ + return get_and_pin_iova_range_locked(obj, aspace, iova, 0, U64_MAX); +} + /* get iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) @@ -501,21 +519,31 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, } /* - * Unpin a iova by updating the reference counts. The memory isn't actually - * purged until something else (shrinker, mm_notifier, destroy, etc) decides - * to get rid of it + * Locked variant of msm_gem_unpin_iova() */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { struct msm_gem_vma *vma; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); + vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); +} +/* + * Unpin a iova by updating the reference counts. 
The memory isn't actually + * purged until something else (shrinker, mm_notifier, destroy, etc) decides + * to get rid of it + */ +void msm_gem_unpin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace) +{ + msm_gem_lock(obj); + msm_gem_unpin_iova_locked(obj, aspace); msm_gem_unlock(obj); } @@ -554,15 +582,14 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret = 0; + WARN_ON(!msm_gem_is_locked(obj)); + if (obj->import_attach) return ERR_PTR(-ENODEV); - msm_gem_lock(obj); - if (WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); - msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } @@ -588,20 +615,29 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) } } - msm_gem_unlock(obj); return msm_obj->vaddr; fail: msm_obj->vmap_count--; - msm_gem_unlock(obj); return ERR_PTR(ret); } -void *msm_gem_get_vaddr(struct drm_gem_object *obj) +void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj) { return get_vaddr(obj, MSM_MADV_WILLNEED); } +void *msm_gem_get_vaddr(struct drm_gem_object *obj) +{ + void *ret; + + msm_gem_lock(obj); + ret = msm_gem_get_vaddr_locked(obj); + msm_gem_unlock(obj); + + return ret; +} + /* * Don't use this! It is for the very special case of dumping * submits from GPU hangs or faults, were the bo may already @@ -610,16 +646,29 @@ void *msm_gem_get_vaddr(struct drm_gem_object *obj) */ void *msm_gem_get_vaddr_active(struct drm_gem_object *obj) { - return get_vaddr(obj, __MSM_MADV_PURGED); + void *ret; + + msm_gem_lock(obj); + ret = get_vaddr(obj, __MSM_MADV_PURGED); + msm_gem_unlock(obj); + + return ret; } -void msm_gem_put_vaddr(struct drm_gem_object *obj) +void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(msm_obj->vmap_count < 1); + msm_obj->vmap_count--; +} + +void msm_gem_put_vaddr(struct drm_gem_object *obj) +{ + msm_gem_lock(obj); + msm_gem_put_vaddr_locked(obj); msm_gem_unlock(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index fbad08badf43..d55d5401a2d2 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -103,10 +103,14 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova); uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); struct page **msm_gem_get_pages(struct drm_gem_object *obj); @@ -115,8 +119,10 @@ int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, struct drm_mode_create_dumb *args); int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, uint32_t handle, uint64_t *offset); +void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj); void *msm_gem_get_vaddr(struct drm_gem_object *obj); void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); +void 
msm_gem_put_vaddr_locked(struct drm_gem_object *obj);
 void msm_gem_put_vaddr(struct drm_gem_object *obj);
 int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
 int msm_gem_sync_object(struct drm_gem_object *obj,
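The convention this patch settles on: foo() takes the per-object lock and calls foo_locked(), while paths that already hold the lock, such as the submit path once it holds the resv lock, call foo_locked() directly. A minimal self-contained model of that split in plain C with pthreads; the names are invented for illustration, not the driver's API (build with: cc -pthread -o locked locked.c).

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct obj {
	pthread_mutex_t lock;
	int pin_count;
};

/* _locked variant: the caller must already hold obj->lock. */
static void obj_pin_locked(struct obj *o)
{
	o->pin_count++;
}

/* Unlocked wrapper: takes the lock and defers to the _locked variant,
 * mirroring how the unlocked helpers wrap the *_locked ones above. */
static void obj_pin(struct obj *o)
{
	pthread_mutex_lock(&o->lock);
	obj_pin_locked(o);
	pthread_mutex_unlock(&o->lock);
}

/* A submit-style path that already holds the lock for several operations
 * calls the _locked variants directly instead of re-acquiring the lock. */
static void submit_path(struct obj *o)
{
	pthread_mutex_lock(&o->lock);
	obj_pin_locked(o);
	obj_pin_locked(o);
	pthread_mutex_unlock(&o->lock);
}

int main(void)
{
	struct obj o = { .lock = PTHREAD_MUTEX_INITIALIZER };

	obj_pin(&o);
	submit_path(&o);
	assert(o.pin_count == 3);
	printf("pin_count = %d\n", o.pin_count);
	return 0;
}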
From patchwork Fri Oct 23 16:51:07 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 292303
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark,
 "Kristian H. Kristensen", Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 06/23] drm/msm/gem: Move locking in shrinker path
Date: Fri, 23 Oct 2020 09:51:07 -0700
Message-Id: <20201023165136.561680-7-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

Move grabbing the bo lock into shrinker, with a msm_gem_trylock() to
skip over bo's that are already locked. This gets rid of the nested
lock classes.

Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
 drivers/gpu/drm/msm/msm_gem.c          | 24 +++++----------------
 drivers/gpu/drm/msm/msm_gem.h          | 29 ++++++++++----------------
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 27 +++++++++++++++++-------
 3 files changed, 35 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e0d8d739b068..1195847714ba 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -17,8 +17,6 @@
 #include "msm_gpu.h"
 #include "msm_mmu.h"
 
-static void msm_gem_vunmap_locked(struct drm_gem_object *obj);
-
 static dma_addr_t physaddr(struct drm_gem_object *obj)
 {
@@ -693,20 +691,19 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
 	return (madv != __MSM_MADV_PURGED);
 }
 
-void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass)
+void msm_gem_purge(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+	WARN_ON(!msm_gem_is_locked(obj));
 	WARN_ON(!is_purgeable(msm_obj));
 	WARN_ON(obj->import_attach);
 
-	mutex_lock_nested(&msm_obj->lock, subclass);
-
 	put_iova(obj);
 
-	msm_gem_vunmap_locked(obj);
+	msm_gem_vunmap(obj);
 
 	put_pages(obj);
 
@@ -724,11 +721,9 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass)
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
 			0, (loff_t)-1);
-
-	msm_gem_unlock(obj);
 }
 
-static void msm_gem_vunmap_locked(struct drm_gem_object *obj)
+void msm_gem_vunmap(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
@@ -741,15 +736,6 @@ static void msm_gem_vunmap_locked(struct drm_gem_object *obj)
 	msm_obj->vaddr = NULL;
 }
 
-void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-
-	mutex_lock_nested(&msm_obj->lock, subclass);
-	msm_gem_vunmap_locked(obj);
-	msm_gem_unlock(obj);
-}
-
 /* must be called before _move_to_active()..
*/ int msm_gem_sync_object(struct drm_gem_object *obj, struct msm_fence_context *fctx, bool exclusive) @@ -986,7 +972,7 @@ static void free_object(struct msm_gem_object *msm_obj) drm_prime_gem_destroy(obj, msm_obj->sgt); } else { - msm_gem_vunmap_locked(obj); + msm_gem_vunmap(obj); put_pages(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d55d5401a2d2..766ce278c74a 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -162,6 +162,13 @@ msm_gem_lock(struct drm_gem_object *obj) mutex_lock(&msm_obj->lock); } +static inline bool __must_check +msm_gem_trylock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_trylock(&msm_obj->lock) == 1; +} + static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { @@ -190,6 +197,7 @@ static inline bool is_active(struct msm_gem_object *msm_obj) static inline bool is_purgeable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex)); return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach; @@ -197,27 +205,12 @@ static inline bool is_purgeable(struct msm_gem_object *msm_obj) static inline bool is_vunmapable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); return (msm_obj->vmap_count == 0) && msm_obj->vaddr; } -/* The shrinker can be triggered while we hold objA->lock, and need - * to grab objB->lock to purge it. Lockdep just sees these as a single - * class of lock, so we use subclasses to teach it the difference. - * - * OBJ_LOCK_NORMAL is implicit (ie. normal mutex_lock() call), and - * OBJ_LOCK_SHRINKER is used by shrinker. - * - * It is *essential* that we never go down paths that could trigger the - * shrinker for a purgable object. This is ensured by checking that - * msm_obj->madv == MSM_MADV_WILLNEED. 
- */ -enum msm_gem_lock { - OBJ_LOCK_NORMAL, - OBJ_LOCK_SHRINKER, -}; - -void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass); -void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass); +void msm_gem_purge(struct drm_gem_object *obj); +void msm_gem_vunmap(struct drm_gem_object *obj); void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 482576d7a39a..2dc0ffa925b4 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -52,8 +52,11 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return 0; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) count += msm_obj->base.size >> PAGE_SHIFT; + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -78,10 +81,13 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) { - msm_gem_purge(&msm_obj->base, OBJ_LOCK_SHRINKER); + msm_gem_purge(&msm_obj->base); freed += msm_obj->base.size >> PAGE_SHIFT; } + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -107,15 +113,20 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) return NOTIFY_DONE; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_vunmapable(msm_obj)) { - msm_gem_vunmap(&msm_obj->base, OBJ_LOCK_SHRINKER); - /* since we don't know any better, lets bail after a few - * and if necessary the shrinker will be invoked again. - * Seems better than unmapping *everything* - */ - if (++unmapped >= 15) - break; + msm_gem_vunmap(&msm_obj->base); + unmapped++; } + msm_gem_unlock(&msm_obj->base); + + /* since we don't know any better, lets bail after a few + * and if necessary the shrinker will be invoked again. 
+ * Seems better than unmapping *everything* + */ + if (++unmapped >= 15) + break; } if (unlock) From patchwork Fri Oct 23 16:51:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284922 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D605C55179 for ; Fri, 23 Oct 2020 16:52:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 314DC241A3 for ; Fri, 23 Oct 2020 16:52:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="H3rot9Tj" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752922AbgJWQwS (ORCPT ); Fri, 23 Oct 2020 12:52:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51690 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751826AbgJWQu1 (ORCPT ); Fri, 23 Oct 2020 12:50:27 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2EF9EC0613CE; Fri, 23 Oct 2020 09:50:27 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id e7so1769203pfn.12; Fri, 23 Oct 2020 09:50:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=jZu6hiAVgN2XwJXAlI42prVaoZpWSXPlbrarieNv0bY=; b=H3rot9Tjm3xnPq+iJ3qKb1IeEgZ0WgD08lHDxcJ/1YKPNcU8MmLQnB/UClAJJBaLfq gwYKDcOAA8E9dU0z98ca6Xpo0OFOKa+Rzx1jBOiI53IDhL6K2EpPHaObA5r9iY1Xu7WB Zq15jyv/RcbtpOgWbxWeP2kERL1mZvrBUaCobnRJ+e8XnaE6542EJ4D8UT94jFBzc4Ed aROR6iPyHo4+m9L5NlIP/VbSU86n73cSZ1QfycG206jeu7zswiBU5jyAR5+tQsNVHvlg oKgQLhvBQO4KdReVtNw7kef2tUp1vqFjk0smxOuMK0vWDslrvmQqdnWy7AoAVGk74i3v cTMg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=jZu6hiAVgN2XwJXAlI42prVaoZpWSXPlbrarieNv0bY=; b=UszetDNXqNk2caXONonPB5yNK34MkIwEFkJMtmuNOrljOZpO5wJFNgFUFAFFAgHm2k MQ8uXh5eqbB936AX1ftqmeq+BkbGrv47f2JdeaMh0rgXhJ7QyGraQalT8No1I8W/RmR2 4zGTHNnTqApwUXJMySLU8nPAyiV67SQFz+JXp12reOnu9FabcEMVYpelaCHkewT6/2mJ OTcrKrzFhnzgHgiw+wvxSu4u4owAOxlv517UsZLP+7lncg6FMbDW8mwRCUgqKLhoyq1A 4gIMX8Y8lXd1oKCq7CoANFMKqubemQyOShvjsujlgIQxE+IKesk/Z8S7/Pz0J4lp5FHp L+8w== X-Gm-Message-State: AOAM531tvnsUb5xwbH0u+Xt9PS+z1OZNXE6LbMwjv2vKqLHEmn40S+YG zWcj5h6YmgpzfWN7rlEpOKI= X-Google-Smtp-Source: ABdhPJwYLHql4P24xJwqIfFlVgCgMc3pr1awi+AXFM6XHohN9oAVKLfXvjEPi8MFhANN5IU9HWg2AA== X-Received: by 2002:a63:845:: with SMTP id 66mr2718631pgi.318.1603471826626; Fri, 23 Oct 2020 09:50:26 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
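A minimal sketch of the trylock-and-skip pattern used by the shrinker change above (patch 06/23), assuming the mutex-per-object scheme msm still has at this point in the series; demo_obj and demo_shrink_count() are illustrative names, not msm driver symbols:

#include <linux/list.h>
#include <linux/mm.h>		/* PAGE_SHIFT */
#include <linux/mutex.h>

struct demo_obj {
	struct mutex lock;
	struct list_head mm_list;
	size_t size;
	bool purgeable;
};

/* Count potentially reclaimable pages, skipping objects whose lock is
 * already held elsewhere instead of nesting lock classes.
 */
static unsigned long demo_shrink_count(struct list_head *inactive_list)
{
	struct demo_obj *obj;
	unsigned long count = 0;

	list_for_each_entry(obj, inactive_list, mm_list) {
		if (!mutex_trylock(&obj->lock))
			continue;	/* busy elsewhere: just skip it */
		if (obj->purgeable)
			count += obj->size >> PAGE_SHIFT;
		mutex_unlock(&obj->lock);
	}

	return count;
}

The trade-off is that an object whose lock happens to be contended is skipped for that shrinker pass, which is cheap compared to maintaining the OBJ_LOCK_NORMAL/OBJ_LOCK_SHRINKER subclass annotations the patch removes.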
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id t10sm3012002pjr.37.2020.10.23.09.50.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:25 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 07/23] drm/msm/submit: Move copy_from_user ahead of locking bos Date: Fri, 23 Oct 2020 09:51:08 -0700 Message-Id: <20201023165136.561680-8-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We cannot switch to using obj->resv for locking without first moving all the copy_from_user() ahead of submit_lock_objects(). Otherwise in the mm fault path we acquire mm->mmap_sem before obj lock, but in the submit path the order is reversed. Signed-off-by: Rob Clark Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gem.h | 3 + drivers/gpu/drm/msm/msm_gem_submit.c | 127 +++++++++++++++++---------- 2 files changed, 82 insertions(+), 48 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 766ce278c74a..1f4e5d871a23 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -240,7 +240,10 @@ struct msm_gem_submit { uint32_t type; uint32_t size; /* in dwords */ uint64_t iova; + uint32_t offset;/* in dwords */ uint32_t idx; /* cmdstream buffer idx in bos[] */ + uint32_t nr_relocs; + struct drm_msm_gem_submit_reloc *relocs; } *cmd; /* array of size nr_cmds */ struct { uint32_t flags; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index aa5c60a7132d..b6c258c89290 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -62,11 +62,16 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, void msm_gem_submit_free(struct msm_gem_submit *submit) { + unsigned i; + dma_fence_put(submit->fence); list_del(&submit->node); put_pid(submit->pid); msm_submitqueue_put(submit->queue); + for (i = 0; i < submit->nr_cmds; i++) + kfree(submit->cmd[i].relocs); + kfree(submit); } @@ -150,6 +155,66 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, return ret; } +static int submit_lookup_cmds(struct msm_gem_submit *submit, + struct drm_msm_gem_submit *args, struct drm_file *file) +{ + unsigned i, sz; + int ret = 0; + + for (i = 0; i < args->nr_cmds; i++) { + struct drm_msm_gem_submit_cmd submit_cmd; + void __user *userptr = + u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); + + ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); + if (ret) { + ret = -EFAULT; + goto out; + } + + /* validate input from userspace: */ + switch (submit_cmd.type) { + case MSM_SUBMIT_CMD_BUF: + case MSM_SUBMIT_CMD_IB_TARGET_BUF: + case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: + break; + default: + DRM_ERROR("invalid type: %08x\n", submit_cmd.type); + return -EINVAL; + } + + if (submit_cmd.size % 4) { + DRM_ERROR("non-aligned cmdstream buffer size: %u\n", + submit_cmd.size); + ret = -EINVAL; + goto out; + } + + submit->cmd[i].type = submit_cmd.type; + submit->cmd[i].size = submit_cmd.size / 4; + submit->cmd[i].offset = submit_cmd.submit_offset / 4; + submit->cmd[i].idx =
submit_cmd.submit_idx; + submit->cmd[i].nr_relocs = submit_cmd.nr_relocs; + + sz = array_size(submit_cmd.nr_relocs, + sizeof(struct drm_msm_gem_submit_reloc)); + /* check for overflow: */ + if (sz == SIZE_MAX) { + ret = -ENOMEM; + goto out; + } + submit->cmd[i].relocs = kmalloc(sz, GFP_KERNEL); + ret = copy_from_user(submit->cmd[i].relocs, userptr, sz); + if (ret) { + ret = -EFAULT; + goto out; + } + } + +out: + return ret; +} + static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i, bool backoff) { @@ -301,7 +366,7 @@ static int submit_bo(struct msm_gem_submit *submit, uint32_t idx, /* process the reloc's and patch up the cmdstream as needed: */ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj, - uint32_t offset, uint32_t nr_relocs, uint64_t relocs) + uint32_t offset, uint32_t nr_relocs, struct drm_msm_gem_submit_reloc *relocs) { uint32_t i, last_offset = 0; uint32_t *ptr; @@ -327,18 +392,11 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob } for (i = 0; i < nr_relocs; i++) { - struct drm_msm_gem_submit_reloc submit_reloc; - void __user *userptr = - u64_to_user_ptr(relocs + (i * sizeof(submit_reloc))); + struct drm_msm_gem_submit_reloc submit_reloc = relocs[i]; uint32_t off; uint64_t iova; bool valid; - if (copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc))) { - ret = -EFAULT; - goto out; - } - if (submit_reloc.submit_offset % 4) { DRM_ERROR("non-aligned reloc offset: %u\n", submit_reloc.submit_offset); @@ -694,6 +752,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; + ret = submit_lookup_cmds(submit, args, file); + if (ret) + goto out; + /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); has_ww_ticket = true; @@ -710,60 +772,29 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; for (i = 0; i < args->nr_cmds; i++) { - struct drm_msm_gem_submit_cmd submit_cmd; - void __user *userptr = - u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); struct msm_gem_object *msm_obj; uint64_t iova; - ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); - if (ret) { - ret = -EFAULT; - goto out; - } - - /* validate input from userspace: */ - switch (submit_cmd.type) { - case MSM_SUBMIT_CMD_BUF: - case MSM_SUBMIT_CMD_IB_TARGET_BUF: - case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - break; - default: - DRM_ERROR("invalid type: %08x\n", submit_cmd.type); - ret = -EINVAL; - goto out; - } - - ret = submit_bo(submit, submit_cmd.submit_idx, + ret = submit_bo(submit, submit->cmd[i].idx, &msm_obj, &iova, NULL); if (ret) goto out; - if (submit_cmd.size % 4) { - DRM_ERROR("non-aligned cmdstream buffer size: %u\n", - submit_cmd.size); + if (!submit->cmd[i].size || + ((submit->cmd[i].size + submit->cmd[i].offset) > + msm_obj->base.size / 4)) { + DRM_ERROR("invalid cmdstream size: %u\n", submit->cmd[i].size * 4); ret = -EINVAL; goto out; } - if (!submit_cmd.size || - ((submit_cmd.size + submit_cmd.submit_offset) > - msm_obj->base.size)) { - DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); - ret = -EINVAL; - goto out; - } - - submit->cmd[i].type = submit_cmd.type; - submit->cmd[i].size = submit_cmd.size / 4; - submit->cmd[i].iova = iova + submit_cmd.submit_offset; - submit->cmd[i].idx = submit_cmd.submit_idx; + submit->cmd[i].iova = iova + (submit->cmd[i].offset * 4); if (submit->valid) continue; - ret = submit_reloc(submit, msm_obj, submit_cmd.submit_offset, - submit_cmd.nr_relocs, 
submit_cmd.relocs); + ret = submit_reloc(submit, msm_obj, submit->cmd[i].offset * 4, + submit->cmd[i].nr_relocs, submit->cmd[i].relocs); if (ret) goto out; } From patchwork Fri Oct 23 16:51:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292294 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B597C388F9 for ; Fri, 23 Oct 2020 16:52:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D790320E65 for ; Fri, 23 Oct 2020 16:52:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="AjUHdScR" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752262AbgJWQwR (ORCPT ); Fri, 23 Oct 2020 12:52:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752745AbgJWQu3 (ORCPT ); Fri, 23 Oct 2020 12:50:29 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3364CC0613D2; Fri, 23 Oct 2020 09:50:29 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id 126so1367854pfu.4; Fri, 23 Oct 2020 09:50:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1nFWz7TauCZeK9jB7qRHq2/DLyA1MoAVvcU8Ul/qSVc=; b=AjUHdScRbMGE0mb7wPFvxl2OkcajrtiwNnNTYoW4VNn95/pHATQekqrYNMB35rNdfE DlyAKcTOQmVWtuWK/Y6dNzVHpNs/AxONzsZU9N1r6o+HAjZhr5WemklERI/8JkuGce49 bi9kLWgn7SozK0QUTp2+h3yppZ0RjcbVukc52uSwu3tMQ1VaJ6rwdzYTsbYtn5JrRziv RkTT6vH1NYnOSptD9tyzxkvDB9pM4LaFI80SR+PKOL/LAQMN5UodLG9jLoed+ptidhtB NnUViLYmbPs6hyl8BGTpdVXNlck7m3RAEOAh/x6/sB8IH7DVkxrn4nLhuAHZUrXzoF6F c52Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1nFWz7TauCZeK9jB7qRHq2/DLyA1MoAVvcU8Ul/qSVc=; b=lx8pyL3vm6k/QsLq4pcji5i5uWc1qv4dFgf/Q85CVRcFLY8hmCVdoJui3J09S0EmV2 ToVwnddR0wszlu+q6H6txKRpCAugBbZ9mnc73IGDkt0aIZfYEUVPGsONMSfQ1DABQZ/i mleZeBzldOffJSpVPb1KmMsy6DhjPFAaAEhMT981tCO5CKgOxX8htd7fEISyDblQNIIv aGR36o4m//rf4NPXfGoOWN8JqAnB4AP2vNcm0EyPhojzjddbXKiQKY5EfdQleoJCU/p/ g0IBqIY8kdzCvRNgXPumxmm2gV6lP21TEDHYoPr9sQVUKEVHTULndJV/YD23QJPjpMtk NYZA== X-Gm-Message-State: AOAM531kRtXfvqisPwY3u0029dO3ig1/56P2qcG/8uMme8gyV9xy0+q2 VNcAm5SG8/PJXqS0E6NlQQ8= X-Google-Smtp-Source: ABdhPJxClr5yl2UDdlPhpEcmq7MchF1xsGWuH2QJzB2FK+8JOjLnLaUbvyvCGaXCRQpd+dnyymFjuw== X-Received: by 2002:a62:1856:0:b029:155:1718:91a3 with SMTP id 83-20020a6218560000b0290155171891a3mr94730pfy.66.1603471828701; Fri, 23 Oct 2020 09:50:28 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
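The lock-ordering rule behind patch 07/23 above, sketched with hypothetical demo_* names (not the msm submit code): copy_from_user() can fault, and the fault path takes mm->mmap_sem, so every user copy has to finish before any object/reservation lock is taken, otherwise the submit path and the fault path acquire the same two locks in opposite orders.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

struct demo_ctx;	/* stand-in for the real submit context */

/* Stand-in for "take reservation locks, pin BOs, write the cmdstream". */
static int demo_lock_and_exec(struct demo_ctx *ctx, void *cmds, size_t sz)
{
	return 0;
}

static int demo_submit(struct demo_ctx *ctx, const void __user *ucmds, size_t sz)
{
	void *cmds;
	int ret;

	cmds = kmalloc(sz, GFP_KERNEL);
	if (!cmds)
		return -ENOMEM;

	/* Step 1: may fault and take mm->mmap_sem; no bo locks held yet. */
	if (copy_from_user(cmds, ucmds, sz)) {
		kfree(cmds);
		return -EFAULT;
	}

	/* Step 2: only now enter the object/reservation locking region. */
	ret = demo_lock_and_exec(ctx, cmds, sz);

	kfree(cmds);
	return ret;
}

With the copies hoisted like this, holding a ww ticket no longer overlaps with anything that can fault, which is exactly what the reworked submit_lookup_cmds() above arranges.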
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id u14sm3443232pjf.53.2020.10.23.09.50.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:27 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 08/23] drm/msm: Do rpm get sooner in the submit path Date: Fri, 23 Oct 2020 09:51:09 -0700 Message-Id: <20201023165136.561680-9-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Unfortunately, due to a dev_pm_opp locking interaction with mm->mmap_sem, we need to do pm get before acquiring obj locks, otherwise we can anger lockdep with the chain: opp_table_lock --> &mm->mmap_sem --> reservation_ww_class_mutex For an explicit fencing userspace, the impact should be minimal as we do all the fence waits before this point. It could result in some needless resumes in error cases, etc. Signed-off-by: Rob Clark Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gem_submit.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index b6c258c89290..aa3c7af54079 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -750,11 +750,20 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, ret = submit_lookup_objects(submit, args, file); if (ret) - goto out; + goto out_pre_pm; ret = submit_lookup_cmds(submit, args, file); if (ret) - goto out; + goto out_pre_pm; + + /* + * Thanks to dev_pm_opp opp_table_lock interactions with mm->mmap_sem + * in the resume path, we need to rpm get before we lock objs. + * Which unfortunately might involve powering up the GPU sooner than + * is necessary. But at least in the explicit fencing case, we will + * have already done all the fence waiting.
+ */ + pm_runtime_get_sync(&gpu->pdev->dev); /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); @@ -831,6 +840,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, out: + pm_runtime_put(&gpu->pdev->dev); +out_pre_pm: submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); From patchwork Fri Oct 23 16:51:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284930 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D234AC55179 for ; Fri, 23 Oct 2020 16:50:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72E0724640 for ; Fri, 23 Oct 2020 16:50:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="pHJb2poo" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752752AbgJWQuc (ORCPT ); Fri, 23 Oct 2020 12:50:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752750AbgJWQub (ORCPT ); Fri, 23 Oct 2020 12:50:31 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A950C0613D4; Fri, 23 Oct 2020 09:50:31 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id t14so1706718pgg.1; Fri, 23 Oct 2020 09:50:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FKdXU4mLTFSbqW41cvebBKb564lzQJs1xHm1BiWuh4c=; b=pHJb2pooUwKzAgS/n6kq+a28P7o4mxIVO/JiYnFi9iF7qk5OjdwYv01d1Perppcimz EO9rgdR7hd/3NnBzQ6oZbJqQgVtCLiEqHwl0lbI0I4IlAxOs9C1l5GULp+ZtpLT0auIq 2K1i+Eh4KTUwGNxeIcdDV7m4QnJDB+6COztznsri1P0GnM9xagDMN/JCfB8M8uPvBSlw NlJJao6uFamN9MZ+mUujR+qdv0z7m4NiGs088D9Cv2r1myPOCa1+3wVG2I6pM4X228zn VQlAlwvvcNcaPzHEGqJ6ufs42bQnuS43R4M1Ofu7yU7ZzTHZl7d6Qf3z0C6RypKw6tbP aVgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FKdXU4mLTFSbqW41cvebBKb564lzQJs1xHm1BiWuh4c=; b=QFbxNwN3pftgby2bkIrka0XzI37ZmpINSOak8ES7glt8MuqrRYIlLVeDOJPejWeX4x eWjA6dJZJuPJKFSr7CZA4BinNWziaNlE+R0pPSI82AgsumYJ1IzvfI4QB9f5vHA2B9lX 02Kjz5pmeBAAGagVKOXbVZ7SaBNLMzegR3mzllcy+WEJmclq21YpO5xP/TbJ+xAjpFMn VgDApTt1Im3YiaXkE2AjGwPyxbArNwAs+eB6QFsjHYsQkwkYSUS7BocjDdaIHdbot9W4 wJm5kKqXbji1Cb1BhMgb3K6wTP8yidDMLlQc0pWoq/pR5B+Z8VH4J5tbD/PSwQ+C5Dc4 RYjA== X-Gm-Message-State: AOAM530E/N/7sZ2IWAWTupH5t6uXREsgHWifZoC2kyH6klLOVC/lxjhZ aY1QNQRZZVWcc7G2pL4FLGY= X-Google-Smtp-Source: ABdhPJxjUrlPkMiFvRkA6XECOEhffOZ0ew10qbhq3vjoc4KLF34I72bh7jDfbuQzcAUawURXBiCDkQ== X-Received: by 2002:a17:90a:ae16:: 
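A stripped-down sketch of the ordering that patch 08/23 above establishes; demo_job/demo_submit_job() are hypothetical names, and the ticket is assumed to be a struct ww_acquire_ctx embedded in the job. The point is that the runtime-PM reference is taken while no reservation lock is held, so the resume path can never end up nested inside reservation_ww_class_mutex:

#include <linux/dma-resv.h>
#include <linux/pm_runtime.h>

struct demo_job {
	struct ww_acquire_ctx ticket;
	/* ... BOs to lock and pin ... */
};

static int demo_submit_job(struct device *dev, struct demo_job *job)
{
	int ret;

	/* Power up first: resume takes opp_table_lock, and lockdep already
	 * knows chains like opp_table_lock -> mm->mmap_sem ->
	 * reservation_ww_class_mutex, so resuming while holding a
	 * reservation lock would close a cycle.
	 */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put(dev);	/* get_sync raises the count even on error */
		return ret;
	}

	/* Only now enter the reservation_ww_class locking region. */
	ww_acquire_init(&job->ticket, &reservation_ww_class);
	/* ... dma_resv_lock() each BO, pin, queue the job ... */
	ww_acquire_fini(&job->ticket);

	pm_runtime_put(dev);
	return 0;
}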
with SMTP id t22mr3426910pjq.55.1603471831026; Fri, 23 Oct 2020 09:50:31 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id p16sm2854398pfq.63.2020.10.23.09.50.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:30 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 09/23] drm/msm/gem: Switch over to obj->resv for locking Date: Fri, 23 Oct 2020 09:51:10 -0700 Message-Id: <20201023165136.561680-10-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark This also converts the special msm_gem_get_vaddr_active() to expect the lock to already be held. There are two call-sites for this, one already has the lock held, so it is more straightforward to just open-code the locking for the other caller. Signed-off-by: Rob Clark Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 17 +++++++---------- drivers/gpu/drm/msm/msm_gem.h | 16 +++++----------- drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++++---- drivers/gpu/drm/msm/msm_gpu.c | 14 ++++++++++++-- drivers/gpu/drm/msm/msm_rd.c | 2 +- 5 files changed, 29 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 1195847714ba..17afa627ea3d 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -644,13 +644,7 @@ void *msm_gem_get_vaddr(struct drm_gem_object *obj) */ void *msm_gem_get_vaddr_active(struct drm_gem_object *obj) { - void *ret; - - msm_gem_lock(obj); - ret = get_vaddr(obj, __MSM_MADV_PURGED); - msm_gem_unlock(obj); - - return ret; + return get_vaddr(obj, __MSM_MADV_PURGED); } void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) @@ -970,15 +964,20 @@ static void free_object(struct msm_gem_object *msm_obj) if (msm_obj->pages) kvfree(msm_obj->pages); + /* dma_buf_detach() grabs resv lock, so we need to unlock + * prior to drm_prime_gem_destroy + */ + msm_gem_unlock(obj); + drm_prime_gem_destroy(obj, msm_obj->sgt); } else { msm_gem_vunmap(obj); put_pages(obj); + msm_gem_unlock(obj); } drm_gem_object_release(obj); - msm_gem_unlock(obj); kfree(msm_obj); } @@ -1050,8 +1049,6 @@ static int msm_gem_new_impl(struct drm_device *dev, if (!msm_obj) return -ENOMEM; - mutex_init(&msm_obj->lock); - msm_obj->flags = flags; msm_obj->madv = MSM_MADV_WILLNEED; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 1f4e5d871a23..f0608d96ef03 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -85,7 +85,6 @@ struct msm_gem_object { * an IOMMU. Also used for stolen/splashscreen buffer. 
*/ struct drm_mm_node *vram_node; - struct mutex lock; /* Protects resources associated with bo */ char name[32]; /* Identifier to print for the debugfs files */ @@ -158,36 +157,31 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); static inline void msm_gem_lock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + dma_resv_lock(obj->resv, NULL); } static inline bool __must_check msm_gem_trylock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_trylock(&msm_obj->lock) == 1; + return dma_resv_trylock(obj->resv); } static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_lock_interruptible(&msm_obj->lock); + return dma_resv_lock_interruptible(obj->resv, NULL); } static inline void msm_gem_unlock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_unlock(&msm_obj->lock); + dma_resv_unlock(obj->resv); } static inline bool msm_gem_is_locked(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_is_locked(&msm_obj->lock); + return dma_resv_is_locked(obj->resv); } static inline bool is_active(struct msm_gem_object *msm_obj) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index aa3c7af54079..044e9bee70a2 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -221,7 +221,7 @@ static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, struct msm_gem_object *msm_obj = submit->bos[i].obj; if (submit->bos[i].flags & BO_PINNED) - msm_gem_unpin_iova(&msm_obj->base, submit->aspace); + msm_gem_unpin_iova_locked(&msm_obj->base, submit->aspace); if (submit->bos[i].flags & BO_LOCKED) dma_resv_unlock(msm_obj->base.resv); @@ -324,7 +324,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) uint64_t iova; /* if locking succeeded, pin bo: */ - ret = msm_gem_get_and_pin_iova(&msm_obj->base, + ret = msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (ret) @@ -383,7 +383,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob /* For now, just map the entire thing. Eventually we probably * to do it page-by-page, w/ kmap() if not vmap()d.. 
*/ - ptr = msm_gem_get_vaddr(&obj->base); + ptr = msm_gem_get_vaddr_locked(&obj->base); if (IS_ERR(ptr)) { ret = PTR_ERR(ptr); @@ -434,7 +434,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob } out: - msm_gem_put_vaddr(&obj->base); + msm_gem_put_vaddr_locked(&obj->base); return ret; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 55d16489d0f3..015f6b884e2e 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -326,7 +326,9 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state, if (!state_bo->data) goto out; + msm_gem_lock(&obj->base); ptr = msm_gem_get_vaddr_active(&obj->base); + msm_gem_unlock(&obj->base); if (IS_ERR(ptr)) { kvfree(state_bo->data); state_bo->data = NULL; @@ -470,14 +472,22 @@ static void recover_worker(struct work_struct *work) put_task_struct(task); } + /* msm_rd_dump_submit() needs bo locked to dump: */ + for (i = 0; i < submit->nr_bos; i++) + msm_gem_lock(&submit->bos[i].obj->base); + if (comm && cmd) { DRM_DEV_ERROR(dev->dev, "%s: offending task: %s (%s)\n", gpu->name, comm, cmd); msm_rd_dump_submit(priv->hangrd, submit, "offending task: %s (%s)", comm, cmd); - } else + } else { msm_rd_dump_submit(priv->hangrd, submit, NULL); + } + + for (i = 0; i < submit->nr_bos; i++) + msm_gem_unlock(&submit->bos[i].obj->base); } /* Record the crash state */ @@ -784,7 +794,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); - msm_gem_get_and_pin_iova(&msm_obj->base, submit->aspace, &iova); + msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_excl_fence(drm_obj->resv, submit->fence); diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c index fea30e7aa9e8..659e5cc4b40a 100644 --- a/drivers/gpu/drm/msm/msm_rd.c +++ b/drivers/gpu/drm/msm/msm_rd.c @@ -333,7 +333,7 @@ static void snapshot_buf(struct msm_rd_state *rd, rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size); - msm_gem_put_vaddr(&obj->base); + msm_gem_put_vaddr_locked(&obj->base); } /* called under struct_mutex */ From patchwork Fri Oct 23 16:51:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6640BC388F9 for ; Fri, 23 Oct 2020 16:52:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0AC82241A3 for ; Fri, 23 Oct 2020 16:52:10 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Muohvj7O" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752361AbgJWQwH (ORCPT ); Fri, 23 Oct 2020 12:52:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51714 "EHLO 
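As a usage note for the obj->resv conversion in patch 09/23 above: with the per-object mutex gone, a caller that wants the kernel mapping of a potentially active object takes the reservation lock explicitly around the access, the same shape as the crash-state capture hunk. demo_peek() below is a made-up helper, not an msm function; the msm_gem_* calls are the wrappers from msm_gem.h:

#include "msm_gem.h"	/* msm_gem_lock(), msm_gem_get_vaddr_active(), ... */

static u32 demo_peek(struct drm_gem_object *obj, unsigned int word)
{
	u32 *buf, val = 0;

	msm_gem_lock(obj);		/* dma_resv_lock(obj->resv, NULL) underneath */
	buf = msm_gem_get_vaddr_active(obj);
	if (!IS_ERR(buf)) {
		val = buf[word];
		msm_gem_put_vaddr_locked(obj);
	}
	msm_gem_unlock(obj);		/* dma_resv_unlock(obj->resv) */

	return val;
}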
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752262AbgJWQud (ORCPT ); Fri, 23 Oct 2020 12:50:33 -0400 Received: from mail-pl1-x641.google.com (mail-pl1-x641.google.com [IPv6:2607:f8b0:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AAC30C0613CE; Fri, 23 Oct 2020 09:50:33 -0700 (PDT) Received: by mail-pl1-x641.google.com with SMTP id 1so1221391ple.2; Fri, 23 Oct 2020 09:50:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=rnfI6CR6MNrByW/oTFh3EaIyTTFNdfmHTDNbySXUNoA=; b=Muohvj7O0oK09MXn0QM6Uj4fGLynOfHZ9rnLrkOxQzUIA3YYK9s7E7k1pTfBfHXuKO Yz/5/9jNTYahvExGjQWsUYuvgS6AeviiDwKnZWOwDEzyoGJUN2sIgOLpttHk5Q6WIsY2 dpyeScX7H8jUAyYRaf6ng8TCF68laaV599dzjm3akNwtBrZKB2VzvQll3zb5rNth6das mezxpuYX5O+xo116WaqkTeQsRZQImTTPnvYoER3YQt2f0j+RvWqcubYe/FETfvPx4yqY IW/FW1rNAi87ixq1VzFAxNeFFdK0GsIyjiY49rVLyFojDe0buZhkeBxJS3M4iZ3x2WUg AImA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=rnfI6CR6MNrByW/oTFh3EaIyTTFNdfmHTDNbySXUNoA=; b=k0tdJJSU4HXWGXF29IejaEiMDxKDRXEqFe0Y9ATwW10VcPrVV47tFYRplU9MOfTLfH hZ/1EjZA5+ksUxC8ffSwWxbLphvvXlDrVE99742oCVKcYnsMYBU6mjG6dXE0zzVuhm6n bMs+QTYIl/tdWPd2BcjkOcCTIEZ7iYQtRUvlq/2RJf3rwZyCT8a2wgrXzjvLjygfYUSi nCPmcYLN2WwPfAxIpgOena4IuaWs4k/W9Q/BLFgPzSiNKnB+WwBdTGvW1KzZTU6b4bzS yeXC8n+Pxcqmw9EtXqM2+IO6IkMzvb5h1EU7ZF4KPP2QeLZM2CsgmoPmuwcLpYrAuvSB RTWA== X-Gm-Message-State: AOAM532k+EPGzggs8AQOBU9CxwtjqYFo24M5yD3Ln6mJ4WJIZ7Xcw5hR KXRn2Un9joSYDmfHRpQQs5U= X-Google-Smtp-Source: ABdhPJyS77JBbY9pncch+9LvadZzVAZxGMDfX//aH2HbO3UEx0cugM4bjeMvIN+39aVdiYUVxVHhzw== X-Received: by 2002:a17:902:b785:b029:d3:d779:7806 with SMTP id e5-20020a170902b785b02900d3d7797806mr3210644pls.70.1603471833259; Fri, 23 Oct 2020 09:50:33 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id z18sm2622020pfn.158.2020.10.23.09.50.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:32 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 10/23] drm/msm: Use correct drm_gem_object_put() in fail case Date: Fri, 23 Oct 2020 09:51:11 -0700 Message-Id: <20201023165136.561680-11-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We only want to use the _unlocked() variant in the unlocked case. Signed-off-by: Rob Clark Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 17afa627ea3d..992cda7e4995 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -1140,7 +1140,11 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, return obj; fail: - drm_gem_object_put(obj); + if (struct_mutex_locked) { + drm_gem_object_put_locked(obj); + } else { + drm_gem_object_put(obj); + } return ERR_PTR(ret); } From patchwork Fri Oct 23 16:51:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292302 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B22ABC388F9 for ; Fri, 23 Oct 2020 16:50:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 53950241A4 for ; Fri, 23 Oct 2020 16:50:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="jjTEZX/w" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752761AbgJWQui (ORCPT ); Fri, 23 Oct 2020 12:50:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752266AbgJWQug (ORCPT ); Fri, 23 Oct 2020 12:50:36 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2172FC0613CE; Fri, 23 Oct 2020 09:50:36 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id w21so1783664pfc.7; Fri, 23 Oct 2020 09:50:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KFzgRAweXzmn2MO9Ofz0hLeT7MUlIlm2XMWMHmibkzc=; b=jjTEZX/wTYdDD5lHNJL27W2xxKiFRbgTn9CZuGMxqBNE7aO6w+V5K1v5f/d5sc171m sZrosNh78sEixsH6tiGfDKMAddRKkWo2GvMDPqQVQd0G8yW56h+s/ilB0LdoDQErSi6I rwlGJIyIBK6hHlWpVj1wDpchhLUgP3IUBFI8ZGmQEqj+uLAdi02T7JPuO0rfuMQADbVi uhvQAcSztekRlvJ9tgRRtEsURgoONqDBknwcqD1cVa+UjPgCZNUgwb8YH+lhJ24pVDI3 9gam584zcrWq/gxASWZrj97X0xvOH7n080Pf+yvfqG7zQoXC65NG4Ymf1jPH/aojr9iI 2atw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KFzgRAweXzmn2MO9Ofz0hLeT7MUlIlm2XMWMHmibkzc=; b=X1dZ/Uj934JI60n5MlMkMr1NX+6f8W249cQkE1yGkQUbh9tuQ1Xef2QgLng5F3YnNC mTTeu6n8Bqhh+XSiM1CNs7NDAHQWOykSKUe1W9fX+adHSCJCSL7n7q5CQ/y8UZYnsD3b tJ5NPjT7LL+KfJ2KWOoJr39rqi0IzrRJDkOZgiTuUObWV38rZxvkyXVh+fzaoMIQuhVq JNbIoM5ZlmMu3WjctBHvHY913pFTinu2xX4zn4zRVCX/j6lE9Sq8BlomfZzzMH2Xezcf yASftiuqLj2Fsl7ORJi2CQsMwbW9OQDIVB7yF9GazGxqP3hWPvvVq++KQfTPeIx9p+5n /xjw== X-Gm-Message-State: 
AOAM530jwWqNo7xUNNXvO2HUWNRzdgSWM2Qs//F7Htt1tTHBHLa2QOJx byJhXDk9w2V8tQouERmR90I= X-Google-Smtp-Source: ABdhPJzFi3q0p9NxlvgnrXQCVcgYMewNZQIFbAEtQamzsp6p/YysaoGKemc8fdiLDn/MO8AmDlMZSA== X-Received: by 2002:a17:90a:c48:: with SMTP id u8mr3502279pje.121.1603471835689; Fri, 23 Oct 2020 09:50:35 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id z3sm2372338pgl.73.2020.10.23.09.50.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:34 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 11/23] drm/msm: Drop chatty trace Date: Fri, 23 Oct 2020 09:51:12 -0700 Message-Id: <20201023165136.561680-12-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It is somewhat redundant with the gpu tracepoints, and anyways not too useful to justify spamming the log when debug traces are enabled. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gpu.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 015f6b884e2e..ed6645aa0ae5 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -545,7 +545,6 @@ static void recover_worker(struct work_struct *work) static void hangcheck_timer_reset(struct msm_gpu *gpu) { - DBG("%s", gpu->name); mod_timer(&gpu->hangcheck_timer, round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); } From patchwork Fri Oct 23 16:51:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292296 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB772C4363A for ; Fri, 23 Oct 2020 16:52:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 52847241A4 for ; Fri, 23 Oct 2020 16:52:06 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="bAvYCHQL" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752770AbgJWQul (ORCPT ); Fri, 23 Oct 2020 12:50:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752765AbgJWQuj (ORCPT ); Fri, 23 Oct 2020 12:50:39 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id A71D1C0613CE; Fri, 23 Oct 2020 09:50:38 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id l18so1712161pgg.0; Fri, 23 Oct 2020 09:50:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=26I0GFWyZ7wPcxO9wRYJvvFDOJC5QoDxFvw7rsNkagw=; b=bAvYCHQLAXBH4Li2fjviZnFfuyh5QV7rOWjlt+FKOk182BLH0t59Zr2k+Tg1oxuGI3 nPdPUyo3a/KquQZMNgQIAXNqoQfkObx9Po1EG7mJmmcQnUpW8pb2P1Gzg9HeLCxAaTQI SlMDG33ybbg7PXlFOGEy8uagRfvLI0G6hOEnlhfTmxm19QCeoX7fpNFcZxYL7BPCnzxN HTH7sSP1RjJt1TLUzyJJW/ELFdjmebIachviQcOQMhkF8SfSLT/pFlpU0YlmLRfyb4Yc ovyJK8CMvV3qAvKdpAWXr3p8oy/bV/1mwE/IFZEuWu3JkzFI2rJyle9ijYLre8OqPNoP 1dcg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=26I0GFWyZ7wPcxO9wRYJvvFDOJC5QoDxFvw7rsNkagw=; b=hJNvHgQ5mmZglWMrL67Mu+Iodq5AWmoir4ILvh0+BZ4KXdu8uD0ZZDrnnltwc2afep a4PW5WXhf1Ik+RlIwcyf/MotGTl7228TareNz+2arjzNDBp1ZHf9KyOYo/uajj9pnfoR 0u1dxlqJbS3E9Mg+txl3sNQq8EEBoIZ4JXmhIt46MIK7voJjVKfshqp8FRm4QjUuLSu2 nWAW8xviFRzd4B/w58SflOeZ8vm8LRWdw6/Qx6s2nfs88x7GqEXbGQdwuDxE9ey57Y43 6bjOUAoAwZFTrtLVlVqZn533J17ynwZf3Fm+TIj8qJN6R1EM30vXTfWtF8ms02xwnJXS vn7g== X-Gm-Message-State: AOAM533JpSSFAV6AY9WWYfXlCUmaF+XnfJbXv5JqgyMRhggA+j6vGzlc HyfSsRkPv8KIDN9QqJOxgk8= X-Google-Smtp-Source: ABdhPJyUsaNtPjHATTB+2ZOVY+Nao/Y0EnlKYPkwKZu8lpK+OLGXDTE4cf1Q60otcuaivQXBSoyZ0g== X-Received: by 2002:a63:4765:: with SMTP id w37mr2721657pgk.332.1603471838125; Fri, 23 Oct 2020 09:50:38 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id v199sm2671446pfc.149.2020.10.23.09.50.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:37 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 12/23] drm/msm: Move update_fences() Date: Fri, 23 Oct 2020 09:51:13 -0700 Message-Id: <20201023165136.561680-13-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Small cleanup, update_fences() is used in the hangcheck path, but also in the normal retire path. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_gpu.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index ed6645aa0ae5..1667d8066897 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -265,6 +265,20 @@ int msm_gpu_hw_init(struct msm_gpu *gpu) return ret; } +static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, + uint32_t fence) +{ + struct msm_gem_submit *submit; + + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno > fence) + break; + + msm_update_fence(submit->ring->fctx, + submit->fence->seqno); + } +} + #ifdef CONFIG_DEV_COREDUMP static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset, size_t count, void *data, size_t datalen) @@ -413,20 +427,6 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, * Hangcheck detection for locked gpu: */ -static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, - uint32_t fence) -{ - struct msm_gem_submit *submit; - - list_for_each_entry(submit, &ring->submits, node) { - if (submit->seqno > fence) - break; - - msm_update_fence(submit->ring->fctx, - submit->fence->seqno); - } -} - static struct msm_gem_submit * find_submit(struct msm_ringbuffer *ring, uint32_t fence) { From patchwork Fri Oct 23 16:51:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09D01C4363A for ; Fri, 23 Oct 2020 16:50:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A2DB72464E for ; Fri, 23 Oct 2020 16:50:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="qQbbS80s" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752776AbgJWQuo (ORCPT ); Fri, 23 Oct 2020 12:50:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51738 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752772AbgJWQul (ORCPT ); Fri, 23 Oct 2020 12:50:41 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C599C0613CE; Fri, 23 Oct 2020 09:50:41 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id 126so1368276pfu.4; Fri, 23 Oct 2020 09:50:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Nf/vn6DN/jdqsddFy69rFe9cIxF4FAPcekTDSdM1FpI=; b=qQbbS80sV6Zd8CuDlU0NTd5PzFxTEKHeVwZf9ZWXvqegG+lxuMfTyINhbWZjI5hdp6 f0GgxpIF3WwrKZ8wI8ft8PUt3YCys7Gddwrq0B/pG17btdlHe5DJ+8+7M1QDfoHjti6J EOFYOrAgv48yM3ceTnvlJ6o4qSxti2vU7BIK+Yt0brJPbDgJ9x2wyQteFz/Ix83A8q/C 
Lo+uaiyg/eqenbEn4I2EVRjqbtiY0qa6SzvBa76+j47XnuIaMvsDa7iB2JVudw0ytQD2 Cb+0vAuS/rgQW0lGeTS0CQZxWBeaaD4SXkp9EjaNphY/2TH7TGqYfzegkx2Oq4eCiY+y HojQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Nf/vn6DN/jdqsddFy69rFe9cIxF4FAPcekTDSdM1FpI=; b=ca8gK5RU30l9zOeTDLO0I+KYlh4j4/hPSYr58CK3iXr9YwNOvJQKqGMQvISNLzd1q/ T8gTmUHhFZLbhoFjHnXeOXycEvBw3FiNMoohFbVnoHkjOVGpjarZS2VINrIVu7rDPsP5 dLWy49yMHNSY9QYZvJM/qf8aHuHJfGgewa8PO7spS+uh+PQprulTnXvzoD2QZcxlocQa gfu958kZQpAzhVKg1SOFD16XLFCZOrMnpn9BFzEsi656IQneOAEYx319M77XPDnJCRCf e5EsssswWL2qUdRRKYayLwo937adYl03ZyeIdzZFzL8LYkJ9xXqzDDZMEGFiWQwPH9UK 1Bbg== X-Gm-Message-State: AOAM530MqrxoPGxny4XHPXASUSI3sC7tJC9bfXpAtqOg9HJPuUBMpfyJ hrggEfU0hn9CfOqP39le4RI= X-Google-Smtp-Source: ABdhPJyGax3yzmV7T0UXqi2Z5jO/4jj63xTSqb14hU0IJ0/QWqkEAfKFypxYiZXt0Atz616ZThADMg== X-Received: by 2002:a05:6a00:170a:b029:152:6881:5e2d with SMTP id h10-20020a056a00170ab029015268815e2dmr121187pfc.20.1603471840558; Fri, 23 Oct 2020 09:50:40 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id s25sm1885780pfh.194.2020.10.23.09.50.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:39 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 13/23] drm/msm: Add priv->mm_lock to protect active/inactive lists Date: Fri, 23 Oct 2020 09:51:14 -0700 Message-Id: <20201023165136.561680-14-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Rather than relying on the big dev->struct_mutex hammer, introduce a more specific lock for protecting the bo lists. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_debugfs.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.h | 13 +++++++++++- drivers/gpu/drm/msm/msm_gem.c | 28 +++++++++++++++----------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 12 +++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 5 ++++- 6 files changed, 58 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c index ee2e270f464c..64afbed89821 100644 --- a/drivers/gpu/drm/msm/msm_debugfs.c +++ b/drivers/gpu/drm/msm/msm_debugfs.c @@ -112,6 +112,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) { struct msm_drm_private *priv = dev->dev_private; struct msm_gpu *gpu = priv->gpu; + int ret; + + ret = mutex_lock_interruptible(&priv->mm_lock); + if (ret) + return ret; if (gpu) { seq_printf(m, "Active Objects (%s):\n", gpu->name); @@ -121,6 +126,8 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) seq_printf(m, "Inactive Objects:\n"); msm_gem_describe_objects(&priv->inactive_list, m); + mutex_unlock(&priv->mm_lock); + return 0; } diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 49685571dc0e..81cb2cecc829 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -7,6 +7,7 @@ #include #include +#include #include #include @@ -441,6 +442,12 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) init_llist_head(&priv->free_list); INIT_LIST_HEAD(&priv->inactive_list); + mutex_init(&priv->mm_lock); + + /* Teach lockdep about lock ordering wrt. shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&priv->mm_lock); + fs_reclaim_release(GFP_KERNEL); drm_mode_config_init(ddev); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 79ee7d05b363..a17dadd38685 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -174,8 +174,19 @@ struct msm_drm_private { struct msm_rd_state *hangrd; /* debugfs to dump hanging submits */ struct msm_perf_state *perf; - /* list of GEM objects: */ + /* + * List of inactive GEM objects. Every bo is either in the inactive_list + * or gpu->active_list (for the gpu it is active on[1]) + * + * These lists are protected by mm_lock. If struct_mutex is involved, it + * should be aquired prior to mm_lock. One should *not* hold mm_lock in + * get_pages()/vmap()/etc paths, as they can trigger the shrinker. 
+ * + * [1] if someone ever added support for the old 2d cores, there could be + * more than one gpu object + */ struct list_head inactive_list; + struct mutex mm_lock; /* worker for delayed free of objects: */ struct work_struct free_work; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 992cda7e4995..81ed03322a74 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -768,13 +768,17 @@ int msm_gem_sync_object(struct drm_gem_object *obj, void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + struct msm_drm_private *priv = obj->dev->dev_private; + + might_sleep(); WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); if (!atomic_fetch_inc(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); + mutex_unlock(&priv->mm_lock); } } @@ -783,12 +787,14 @@ void msm_gem_active_put(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_drm_private *priv = obj->dev->dev_private; - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + might_sleep(); if (!atomic_dec_return(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); } } @@ -943,13 +949,16 @@ static void free_object(struct msm_gem_object *msm_obj) { struct drm_gem_object *obj = &msm_obj->base; struct drm_device *dev = obj->dev; + struct msm_drm_private *priv = dev->dev_private; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); + mutex_lock(&priv->mm_lock); list_del(&msm_obj->mm_list); + mutex_unlock(&priv->mm_lock); msm_gem_lock(obj); @@ -1128,14 +1137,9 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); } - if (struct_mutex_locked) { - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - } else { - mutex_lock(&dev->struct_mutex); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - mutex_unlock(&dev->struct_mutex); - } + mutex_lock(&priv->mm_lock); + list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); return obj; @@ -1203,9 +1207,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, msm_gem_unlock(obj); - mutex_lock(&dev->struct_mutex); + mutex_lock(&priv->mm_lock); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - mutex_unlock(&dev->struct_mutex); + mutex_unlock(&priv->mm_lock); return obj; diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 2dc0ffa925b4..6be073b8ca08 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -51,6 +51,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return 0; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -59,6 +61,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -78,6 +82,8 @@ 
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return SHRINK_STOP; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; @@ -90,6 +96,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -112,6 +120,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) if (!msm_gem_shrinker_lock(dev, &unlock)) return NOTIFY_DONE; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -129,6 +139,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) break; } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 6c9e1fdc1a76..1806e87600c0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -94,7 +94,10 @@ struct msm_gpu { struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS]; int nr_rings; - /* list of GEM active objects: */ + /* + * List of GEM active objects on this gpu. Protected by + * msm_drm_private::mm_lock + */ struct list_head active_list; /* does gpu need hw_init? */ From patchwork Fri Oct 23 16:51:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1E84C388F9 for ; Fri, 23 Oct 2020 16:52:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 837DC241A4 for ; Fri, 23 Oct 2020 16:52:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="QDM0eNzG" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752788AbgJWQut (ORCPT ); Fri, 23 Oct 2020 12:50:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51752 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752750AbgJWQus (ORCPT ); Fri, 23 Oct 2020 12:50:48 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9F6B4C0613CE; Fri, 23 Oct 2020 09:50:46 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id 19so1673233pge.12; Fri, 23 Oct 2020 09:50:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ROvGXTXV5iRZVbSVby/8TK0BFiX1W2qsGpGNQ6b5ENE=; b=QDM0eNzGyb+yFV42+dhVV+GpM3z/3cW0gDbGDjTIMzxd21TPdnaf/GJMjfhXu0s5Jp 
rlebbniN+Y0b+r2fCioQicjFgAwI/yJb6AtFomLROHqbuaCjWfhWapmPMsrHN4GW6kVJ mezMoRpheXGUNeX4WfWou77Ae11Dxl9lWaASkOm9Jhb4VTjfgOAhVCbri2eC+KGrj0Vj 0DgV7BlxxhN2FxJ+w3deDJwRdnJceXUrfTZDEBW2XVthg9Mgr4hLEEGusyh7AB57zeb0 Gxwo1wyNcm9xB6ADnxvst/ajQbzgOdKVBV3HfsMZKJ8tqaN5KzAOsDTqpWvTGSdu5YmC Kkzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ROvGXTXV5iRZVbSVby/8TK0BFiX1W2qsGpGNQ6b5ENE=; b=GLU1NaLBYtBmpfg3VFhIVxInyQTGLGvtTpyFnAmChXOGpVLpUXhLmAHeKKod9OQMlT BVOouX0LMhr+hPwqSkby9hCu/Rjl/A8UqTP3GnrOOrYHDBbeAYw5yITbxCs8onEnR2h+ 4vyuOCsVLXXb/mJeGhGBkHXbEzEeHMLSWmeMj/8f5BYMBa6C+Nj7W5Cb/1kAX7RzZclb LGXPckZ9+e3V4ziFqwkZQhbjE+W1D1m5qI2N9/0JzqZbiWIUnSo/cTrp7rVZRzAZAywn qCjlOVdKGXdjHLjtekb1a9LtG2rt4USp3qUYNvGrezWUESpNvIonx2ikZaOp0fh9inEJ 1TNw== X-Gm-Message-State: AOAM530ekiZFOW+cnJEkoWgv/B3I28qmkBIHDgt1tAcZu1ZqPxxwDcRl HFwnfoWiEyTFiUnBy4zA904= X-Google-Smtp-Source: ABdhPJx9y+L8TYuGxVQ7Ra3q1nEMc2p6MU+fIUgmsSvbW8Szs4dwCvnm+D+KBRtHF4B7ygm2hlruJQ== X-Received: by 2002:aa7:9575:0:b029:152:97f9:f884 with SMTP id x21-20020aa795750000b029015297f9f884mr88532pfq.80.1603471846145; Fri, 23 Oct 2020 09:50:46 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id k9sm2585582pgt.72.2020.10.23.09.50.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:44 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , Eric Anholt , AngeloGioacchino Del Regno , Emil Velikov , Jonathan Marek , Sharat Masetty , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 14/23] drm/msm: Document and rename preempt_lock Date: Fri, 23 Oct 2020 09:51:15 -0700 Message-Id: <20201023165136.561680-15-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Before adding another lock, give ring->lock a more descriptive name. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 ++++++------ drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.h | 7 ++++++- 5 files changed, 17 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 2180650a03bc..a1b9419a59c9 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -36,7 +36,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -44,7 +44,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 7e04509c4e1f..183de1139eeb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -45,9 +45,9 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) if (!ring) return; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr); } @@ -62,9 +62,9 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu) bool empty; struct msm_ringbuffer *ring = gpu->rb[i]; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); empty = (get_wptr(ring) == ring->memptrs->rptr); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); if (!empty) return ring; @@ -132,9 +132,9 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu) } /* Make sure the wptr doesn't update while we're in motion */ - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Set the address of the incoming preemption record */ gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 16eaaf0804ca..eb44e0dbef34 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -65,7 +65,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -73,7 +73,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff --git 
a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 935bf9b1d941..1b6958e908dc 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,7 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); - spin_lock_init(&ring->lock); + spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 0987d6bf848c..4956d1bc5d0e 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -46,7 +46,12 @@ struct msm_ringbuffer { struct msm_rbmemptrs *memptrs; uint64_t memptrs_iova; struct msm_fence_context *fctx; - spinlock_t lock; + + /* + * preempt_lock protects preemption and serializes wptr updates against + * preemption. Can be aquired from irq context. + */ + spinlock_t preempt_lock; }; struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, From patchwork Fri Oct 23 16:51:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284924 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73EF3C55179 for ; Fri, 23 Oct 2020 16:52:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2268F241A4 for ; Fri, 23 Oct 2020 16:52:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="nlevXG4z" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S461416AbgJWQwA (ORCPT ); Fri, 23 Oct 2020 12:52:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752786AbgJWQut (ORCPT ); Fri, 23 Oct 2020 12:50:49 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1ED48C0613CE; Fri, 23 Oct 2020 09:50:49 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id l18so1712533pgg.0; Fri, 23 Oct 2020 09:50:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=tZaC8+l7T3Nn2nFuFk4PV5zbgmowHmgwiXHAwS8vo7M=; b=nlevXG4zCVJI8V0UPq4ltCHXYoKr6nNJoyEhrMcrJLaeU54hj8E7fiztUQJY/UYKCH Mpi9Wr71K6eir4ZPqC0tGuE1tkpLcIc8wdMGj3ahZdnb32NUQHkYRb1GMmvEFn19LR/n K8AZbTTpRlmTwHURyhqVIScRHztSS0lDKdnAhTxDuhT0kB3HTnmvVPOIkTTteEVA/zfZ CUSaBjfrNUOT2CQn/Y+zzQRS6zxCdecdMpFT4V1q1TcHpDGaYmTAoOfCOWyD+81gugjF uuzJnFiA5giDRwbL8PZ6Wnzg9qDC5LTACPD+OXDmEJwjcE6shYHUP20W8Nqs2vF6PvM4 cnqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=tZaC8+l7T3Nn2nFuFk4PV5zbgmowHmgwiXHAwS8vo7M=; b=HcjHVtdeU5OW/8b9XNxZt/WkhCi6QjK2+DeAXXhv3rHGEjCEmROgbSQk5ccx4mTu2t 7nqJY4J480b5xtSKg/AjzFmQvifXyGMNEtt8U3p3Qz/0N0LPam6RdCqaCT/NNKsu4M1y ODcNQsrkdi54Afsia6D2boTwFbQkhcKE5CSBnUoBxeRYsmD95T5erCB8yfCiMgMInoU4 OG96vHJDI96h0NtabE1PjHJHW/MihCDi77rmG5JiKpT1Y+4Z7UDm9gX5QV7ReyQ/9i/v +k4C1GSE0Y+/AxWlMqXY59cRpZz2b0rKua9+0rshOA1TmqGKSEcuahPhYfxwfuxwITV2 4OMw== X-Gm-Message-State: AOAM532Mp/GUU/A4YDPtIN5Oh5z/clrvQHIONscX30sBxkLMCMxQaZNu BDHTLftNT0GNQcgObQKaECw= X-Google-Smtp-Source: ABdhPJz96X6LN2zav2wrpyLoty6EfWTFbVNefeVp8ZTXAb/lEIaTBZ5Dr55kZA6q2CbWuMnsH+mr9Q== X-Received: by 2002:a63:3c5c:: with SMTP id i28mr2870584pgn.166.1603471848608; Fri, 23 Oct 2020 09:50:48 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id b20sm2517848pft.55.2020.10.23.09.50.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:47 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 15/23] drm/msm: Protect ring->submits with it's own lock Date: Fri, 23 Oct 2020 09:51:16 -0700 Message-Id: <20201023165136.561680-16-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark One less place to rely on dev->struct_mutex. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_gem_submit.c | 2 ++ drivers/gpu/drm/msm/msm_gpu.c | 37 ++++++++++++++++++++++------ drivers/gpu/drm/msm/msm_ringbuffer.c | 1 + drivers/gpu/drm/msm/msm_ringbuffer.h | 6 +++++ 4 files changed, 39 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 044e9bee70a2..24ce4c65429d 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -65,7 +65,9 @@ void msm_gem_submit_free(struct msm_gem_submit *submit) unsigned i; dma_fence_put(submit->fence); + spin_lock(&submit->ring->submit_lock); list_del(&submit->node); + spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 1667d8066897..1d6f3dc3fe78 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -270,6 +270,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, { struct msm_gem_submit *submit; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) { if (submit->seqno > fence) break; @@ -277,6 +278,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_update_fence(submit->ring->fctx, submit->fence->seqno); } + spin_unlock(&ring->submit_lock); } #ifdef CONFIG_DEV_COREDUMP @@ -432,11 +434,14 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence) { struct msm_gem_submit *submit; - WARN_ON(!mutex_is_locked(&ring->gpu->dev->struct_mutex)); - - list_for_each_entry(submit, &ring->submits, node) - if (submit->seqno == fence) + spin_lock(&ring->submit_lock); + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno == fence) { + spin_unlock(&ring->submit_lock); return submit; + } + } + spin_unlock(&ring->submit_lock); return NULL; } @@ -533,8 +538,10 @@ static void recover_worker(struct work_struct *work) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) gpu->funcs->submit(gpu, submit); + spin_unlock(&ring->submit_lock); } } @@ -721,7 +728,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { struct drm_device *dev = gpu->dev; - struct msm_gem_submit *submit, *tmp; int i; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); @@ -730,9 +736,24 @@ static void retire_submits(struct msm_gpu *gpu) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; - list_for_each_entry_safe(submit, tmp, &ring->submits, node) { - if (dma_fence_is_signaled(submit->fence)) + while (true) { + struct msm_gem_submit *submit = NULL; + + spin_lock(&ring->submit_lock); + submit = list_first_entry_or_null(&ring->submits, + struct msm_gem_submit, node); + spin_unlock(&ring->submit_lock); + + /* + * If no submit, we are done. If submit->fence hasn't + * been signalled, then later submits are not signalled + * either, so we are also done. 
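The loop above only ever needs to look at the head of ring->submits: submits on a ring complete in order, so an unsignalled fence at the head means everything behind it is unsignalled too, and submit_lock is held just long enough to peek at the list, never across the retire work itself. A condensed sketch of that pattern, assuming the driver's own types and the retire_submit() helper touched by this patch:

static void demo_retire_ring(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
{
	while (true) {
		struct msm_gem_submit *submit;

		/* Peek at the oldest in-flight submit under the lock ... */
		spin_lock(&ring->submit_lock);
		submit = list_first_entry_or_null(&ring->submits,
				struct msm_gem_submit, node);
		spin_unlock(&ring->submit_lock);

		/* ... but retire (which can sleep) outside of it. */
		if (!submit || !dma_fence_is_signaled(submit->fence))
			break;

		/* Drops the submit from ring->submits and frees it. */
		retire_submit(gpu, ring, submit);
	}
}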
+ */ + if (submit && dma_fence_is_signaled(submit->fence)) { retire_submit(gpu, ring, submit); + } else { + break; + } } } } @@ -775,7 +796,9 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) submit->seqno = ++ring->seqno; + spin_lock(&ring->submit_lock); list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); msm_rd_dump_submit(priv->rd, submit, NULL); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 1b6958e908dc..4d2a2a4abef8 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,6 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); + spin_lock_init(&ring->submit_lock); spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 4956d1bc5d0e..fe55d4a1aa16 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -39,7 +39,13 @@ struct msm_ringbuffer { int id; struct drm_gem_object *bo; uint32_t *start, *end, *cur, *next; + + /* + * List of in-flight submits on this ring. Protected by submit_lock. + */ struct list_head submits; + spinlock_t submit_lock; + uint64_t iova; uint32_t seqno; uint32_t hangcheck_fence; From patchwork Fri Oct 23 16:51:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292301 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FCB6C388F9 for ; Fri, 23 Oct 2020 16:50:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A29B9241A4 for ; Fri, 23 Oct 2020 16:50:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="rIiKuuNV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752796AbgJWQux (ORCPT ); Fri, 23 Oct 2020 12:50:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51768 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752791AbgJWQuv (ORCPT ); Fri, 23 Oct 2020 12:50:51 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47776C0613D2; Fri, 23 Oct 2020 09:50:51 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id j18so1807141pfa.0; Fri, 23 Oct 2020 09:50:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=GKxFRnjWbzeynLjiAycb1bjQvjSpLQB6o5QawmkEOkA=; b=rIiKuuNVvOgMEozh4x6fmQlP5rcSe4l3M7Ynk4bzHcFIzKYXi48PE/YOJ7fyUv1Gjm Vbfp0+kL8aGEF1jVEOr8FTZ3EtHt9nHleQuiIFUuEGTMdYG/Flksc36R/VQvBXb8+jZ8 
jhkiU+3uV0jwo02K36IWwL1+FLnN2HFwIjr8PyNRCbT5H62xKj9OhOk77vr4UQYfxfBv qRoGgPMEak8JfXysY4PGG/VhZS88yYKFijKvAA0O1EM26TrqiLHlite0H18FXKwnH6br uWd+TJCriUHlui0qg4DXChSEkHc0Q19ZD1PnnLF1plLU3WmbxN50YcntFCMym1CA6lEi yqOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=GKxFRnjWbzeynLjiAycb1bjQvjSpLQB6o5QawmkEOkA=; b=q7Dpn38YNCSBze0xSYcn/Wr2XUqOiF6RVcBt8A34fv7GJnkaVnkpdEwNMXtg8Wwn5f iOpsgKmp8N08AT++wgy7XKdXellSdAt9eJJ/Zzhw96scYj+nqawlTqPCqsbP6XJeK3l0 XYgkCtN7FqY57H/f7i56aJ8oCYUAqPOx31JLorreHtp5hZdLhNFToMnPTLrwQ5bBCORI tkLdNYVp5ET+93xDbu5iq9Vr55y07II6pZq9CNy9gSAZBwNUUZqFnh9Kj+ks9kYlWNwl 32E69413uZrF0VIYXyecnYL0/ph5/ZTfxtFOMFBM/xS1QI46/7R7oS0lUVsslI+7o2n1 DObg== X-Gm-Message-State: AOAM533rm2AGRL0x6r8CryTahxhDMgoY6xvzidy37MS/vQAie2ZsU0L/ 4867ZJx/dNSHWsI0SWNoj1c= X-Google-Smtp-Source: ABdhPJwRvknrForq9c0tmRaatGiXS5uxRbvnYUkSD0tCwEvqSkO/ctwT1h0MipvB2W4kg657j9Wtcw== X-Received: by 2002:a63:3004:: with SMTP id w4mr2714510pgw.249.1603471850767; Fri, 23 Oct 2020 09:50:50 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id e6sm2684025pfn.190.2020.10.23.09.50.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:49 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 16/23] drm/msm: Refcount submits Date: Fri, 23 Oct 2020 09:51:17 -0700 Message-Id: <20201023165136.561680-17-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Before we remove dev->struct_mutex from the retire path, we have to deal with the situation of a submit retiring before the submit ioctl returns. To deal with this, ring->submits will hold a reference to the submit, which is dropped when the submit is retired. And the submit ioctl path holds it's own ref, which it drops when it is done with the submit. Also, add to submit list *after* getting/pinning bo's, to prevent badness in case the completed fence is corrupted, and retire_worker mistakenly believes the submit is done too early. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_drv.h | 1 - drivers/gpu/drm/msm/msm_gem.h | 13 +++++++++++++ drivers/gpu/drm/msm/msm_gem_submit.c | 11 +++++------ drivers/gpu/drm/msm/msm_gpu.c | 21 ++++++++++++++++----- 4 files changed, 34 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a17dadd38685..2ef5cff19883 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -277,7 +277,6 @@ void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); bool msm_use_mmu(struct drm_device *dev); -void msm_gem_submit_free(struct msm_gem_submit *submit); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f0608d96ef03..2f289c436ddd 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -213,6 +213,7 @@ void msm_gem_free_work(struct work_struct *work); * lasts for the duration of the submit-ioctl. */ struct msm_gem_submit { + struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; struct msm_gem_address_space *aspace; @@ -249,6 +250,18 @@ struct msm_gem_submit { } bos[]; }; +void __msm_gem_submit_destroy(struct kref *kref); + +static inline void msm_gem_submit_get(struct msm_gem_submit *submit) +{ + kref_get(&submit->ref); +} + +static inline void msm_gem_submit_put(struct msm_gem_submit *submit) +{ + kref_put(&submit->ref, __msm_gem_submit_destroy); +} + /* helper to determine of a buffer in submit should be dumped, used for both * devcoredump and debugfs cmdstream dumping: */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 24ce4c65429d..d04c349d8112 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -42,6 +42,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, if (!submit) return NULL; + kref_init(&submit->ref); submit->dev = dev; submit->aspace = queue->ctx->aspace; submit->gpu = gpu; @@ -60,14 +61,13 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, return submit; } -void msm_gem_submit_free(struct msm_gem_submit *submit) +void __msm_gem_submit_destroy(struct kref *kref) { + struct msm_gem_submit *submit = + container_of(kref, struct msm_gem_submit, ref); unsigned i; dma_fence_put(submit->fence); - spin_lock(&submit->ring->submit_lock); - list_del(&submit->node); - spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); @@ -847,8 +847,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); - if (ret) - msm_gem_submit_free(submit); + msm_gem_submit_put(submit); out_unlock: if (ret && (out_fence_fd >= 0)) put_unused_fd(out_fence_fd); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 1d6f3dc3fe78..bcd9b4fa98b2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -722,7 +722,12 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, pm_runtime_mark_last_busy(&gpu->pdev->dev); pm_runtime_put_autosuspend(&gpu->pdev->dev); - msm_gem_submit_free(submit); + + spin_lock(&ring->submit_lock); + list_del(&submit->node); + spin_unlock(&ring->submit_lock); + + msm_gem_submit_put(submit); } static void retire_submits(struct msm_gpu *gpu) @@ -796,10 +801,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) 
submit->seqno = ++ring->seqno; - spin_lock(&ring->submit_lock); - list_add_tail(&submit->node, &ring->submits); - spin_unlock(&ring->submit_lock); - msm_rd_dump_submit(priv->rd, submit, NULL); update_sw_cntrs(gpu); @@ -826,6 +827,16 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) msm_gem_active_get(drm_obj, gpu); } + /* + * ring->submits holds a ref to the submit, to deal with the case + * that a submit completes before msm_ioctl_gem_submit() returns. + */ + msm_gem_submit_get(submit); + + spin_lock(&ring->submit_lock); + list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); + gpu->funcs->submit(gpu, submit); priv->lastctx = submit->queue->ctx; From patchwork Fri Oct 23 16:51:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284928 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD986C5517A for ; Fri, 23 Oct 2020 16:51:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 860F924655 for ; Fri, 23 Oct 2020 16:51:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="LpeSyaXQ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752804AbgJWQvA (ORCPT ); Fri, 23 Oct 2020 12:51:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752266AbgJWQuz (ORCPT ); Fri, 23 Oct 2020 12:50:55 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CFECBC0613CE; Fri, 23 Oct 2020 09:50:53 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id o3so1675392pgr.11; Fri, 23 Oct 2020 09:50:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=T65l5OzdCWVlcYo2A+cV1OI3UJqt/gtMxWAEJ0nkpyI=; b=LpeSyaXQ5Uq5TNwOlPK9WX4WMRNr+SFQfDimiv4wY/ZkutkYyhIs/pnisIqt5SIL4y KZanpAVtzXDdblPb9Lc/fRTJMCsN1QOF/roXuJMbH3DwWHYsERxwjGWjvViaV5pRxa3G y6dcBMWgh0zYn2MhxBRmH/IlA4NIkssyGKs8rZcBwR4irJwJ7Wrw+GRyND8p8NQEQxnO Z774gM5m1yHUU4/T5/7CSbJDESSF7vOxqHcpBBY1VzEOd/KejiVZsCHRoSTQgjAPCqPY 4lHvnAQZ2LZcrlhMbq4cpaRiV9lz5/KvBxgGmo4zo1A3ONz36Y2n6rQryupz7mXhcXp5 OSTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=T65l5OzdCWVlcYo2A+cV1OI3UJqt/gtMxWAEJ0nkpyI=; b=B1f608sm7p39yu2YRHd1Bqy4MhjaBaSry9eAU2UcOMp/3KXXAPbsMntHdVWn5/aMwl CJ7cvApFjT21OsjtCBkP/rPtCeOJ6SsmVJocTm53slI7Sf522RMbea9SPOS4oHtC8Dve OvN64glPcXnVaPAtTtqTVB74VIsp5u0BMtS9NCbKDpv//3C1yZTl6P5B36baPzXoMYct 
jGrgl4ukAeI2gY0pxeLUxpk1NvQVcd/ekLyQCNB1U/i5LhJqOWl40m1ony0hyo4nGRFV T/O73dvp+0NnchcwongyTaIrLaOVpQiCsjHgZ336lxALmTCpHUxDBPnIAa9lmqKDs70p pXPw== X-Gm-Message-State: AOAM533okRh9l7IP8Lf8QOmE/AUIi5kXPpI6psaN/IcJAqF5ahI/5R2D 0jxkqy3WtJMXD2qlYPVP7IxuvhmwaJhlCQ== X-Google-Smtp-Source: ABdhPJxdsGn4DcW3sIu1T8u4cxzkDAUGi6q8In/53gQmu61hevJk4ZwWopThRrd5CvYRNlJCcvNw9g== X-Received: by 2002:a62:ce08:0:b029:156:4427:4b29 with SMTP id y8-20020a62ce080000b029015644274b29mr3112775pfg.70.1603471853312; Fri, 23 Oct 2020 09:50:53 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id t6sm2661170pfl.50.2020.10.23.09.50.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:52 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 17/23] drm/msm: Remove obj->gpu Date: Fri, 23 Oct 2020 09:51:18 -0700 Message-Id: <20201023165136.561680-18-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It cannot be atomically updated with obj->active_count, and the only purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once retire_submits() is not serialized with incoming submits via struct_mutex) Signed-off-by: Rob Clark Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 2 -- drivers/gpu/drm/msm/msm_gem.h | 1 - drivers/gpu/drm/msm/msm_gpu.c | 5 ----- 3 files changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 81ed03322a74..b9de19462109 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -775,7 +775,6 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) if (!atomic_fetch_inc(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); mutex_unlock(&priv->mm_lock); @@ -791,7 +790,6 @@ void msm_gem_active_put(struct drm_gem_object *obj) if (!atomic_dec_return(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); mutex_unlock(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 2f289c436ddd..f4e73c6f07bf 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -64,7 +64,6 @@ struct msm_gem_object { * */ struct list_head mm_list; - struct msm_gpu *gpu; /* non-null if active */ /* Transiently in the process of submit ioctl, objects associated * with the submit are on submit->bo_list.. this only lasts for diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index bcd9b4fa98b2..d0f625112a97 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -810,11 +810,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) struct drm_gem_object *drm_obj = &msm_obj->base; uint64_t iova; - /* can't happen yet.. 
but when we add 2d support we'll have - * to deal w/ cross-ring synchronization: - */ - WARN_ON(is_active(msm_obj) && (msm_obj->gpu != gpu)); - /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); From patchwork Fri Oct 23 16:51:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50DCBC4363A for ; Fri, 23 Oct 2020 16:51:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E4B13241A3 for ; Fri, 23 Oct 2020 16:51:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="DMTsSiYJ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752809AbgJWQvC (ORCPT ); Fri, 23 Oct 2020 12:51:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752800AbgJWQu4 (ORCPT ); Fri, 23 Oct 2020 12:50:56 -0400 Received: from mail-pl1-x641.google.com (mail-pl1-x641.google.com [IPv6:2607:f8b0:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 084CBC0613CE; Fri, 23 Oct 2020 09:50:56 -0700 (PDT) Received: by mail-pl1-x641.google.com with SMTP id r3so1168890plo.1; Fri, 23 Oct 2020 09:50:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Df41UALTBav1hw0+WsIt5rNabe6ifDpe4GxP6jKIpnQ=; b=DMTsSiYJK25SGd/0+1hIyuxSfglDtzT9Hkhj/i/mQWML3GxA41gGn7MWti3yhltgdp 3WKCth+stns7x8gcM/2vkx5Ma8KYb69VIwWUJboD6znalztOJMBNstPI7VqO3t5/6ZSi lDZghHso4lJJS/fzeTuRGhK9zp8GUJ8hVa5vu1M+pYsZzP6SzKHUvcSh5uxNqwC86MhC uZDYTLbpMBNtcLaNYzVhUFrQg4udllwyN0Zg38GVe8mI5XcOSHoU9Z6aQASMsyoxCgFk O/UjpkAGM7goFs2v2VpxePXXGty3j2p5COhddrlA2sp2lsCFOsYXAgD45qeaM8wGTt// 3qOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Df41UALTBav1hw0+WsIt5rNabe6ifDpe4GxP6jKIpnQ=; b=k1qUjS9bUh693LJjllXaWrm6SfDQiOPDxN27QDgCa5jBVTZb+40Qn5ZU7ipbPANAHe QhAScrVJWuma2dZ5DtaHThlvA7y9+k+BkoHsK7gjkwxtHZRJeMrMM1W4Bgn16a9kMOAf CsKaMWyEc/9UJPQhlSM8zIJrog79Nd05B4Vv9u5Da4x26ipJfTNWS161eu+UvgUZTW+B Wxc7sE59L/yzTm268FIde0N6t1hb7njXduAKLfc7YvPr9SYL+2CADdKtakn3hPKkNpZo jNsUwFKlVaTvJpyuqtMeAr7ko8d0D42s3PkpeXMrmH9t5m6xcf7+VpeTM+H9l4jIqPfd /xvQ== X-Gm-Message-State: AOAM530p5+nusjFBT7Nn4XIJxO7ukly82qzc6VPx/gxosAkgl57CWRxC 2Smi7OMWg2H9yTIhb/UyQB4= X-Google-Smtp-Source: ABdhPJwacuI4x3hfJSueVi7qQmkhVY3B1dqPHxRyOQifYIc3coSseB3KJNP3ta+ecucFUC9MjAfAFA== X-Received: by 2002:a17:902:b945:b029:d5:f385:237e with SMTP id 
h5-20020a170902b945b02900d5f385237emr3183603pls.84.1603471855522; Fri, 23 Oct 2020 09:50:55 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id n12sm3100042pjt.16.2020.10.23.09.50.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:54 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , Jordan Crouse , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 18/23] drm/msm: Drop struct_mutex from the retire path Date: Fri, 23 Oct 2020 09:51:19 -0700 Message-Id: <20201023165136.561680-19-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we are not relying on dev->struct_mutex to protect the ring->submits lists, drop the struct_mutex lock. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Kristian H. Kristensen --- drivers/gpu/drm/msm/msm_gpu.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d0f625112a97..30ba3beaad0a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -717,7 +717,7 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_gem_active_put(&msm_obj->base); msm_gem_unpin_iova(&msm_obj->base, submit->aspace); - drm_gem_object_put_locked(&msm_obj->base); + drm_gem_object_put(&msm_obj->base); } pm_runtime_mark_last_busy(&gpu->pdev->dev); @@ -732,11 +732,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { - struct drm_device *dev = gpu->dev; int i; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* Retire the commits starting with highest priority */ for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; @@ -766,15 +763,12 @@ static void retire_submits(struct msm_gpu *gpu) static void retire_worker(struct work_struct *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, retire_work); - struct drm_device *dev = gpu->dev; int i; for (i = 0; i < gpu->nr_rings; i++) update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence); - mutex_lock(&dev->struct_mutex); retire_submits(gpu); - mutex_unlock(&dev->struct_mutex); } /* call from irq handler to schedule work to retire bo's */ From patchwork Fri Oct 23 16:51:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E874AC4363A for ; Fri, 23 Oct 2020 16:51:51 +0000 (UTC) Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 92D1D24671 for ; Fri, 23 Oct 2020 16:51:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="R6xegtTE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S461198AbgJWQvu (ORCPT ); Fri, 23 Oct 2020 12:51:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752803AbgJWQvC (ORCPT ); Fri, 23 Oct 2020 12:51:02 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8ABF0C0613D2; Fri, 23 Oct 2020 09:50:58 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id 19so1673676pge.12; Fri, 23 Oct 2020 09:50:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=T3n/W2hBHdYFkNX33oVXA/c4AmlMUnQ6/c07ibiZgNA=; b=R6xegtTENd9s9cRtgkN8d2Fcnu/DqggarHuf3eV++GtmVU1V0WM2Y1/05cF4lhfE0W G5n3o6YcFhTZ/Zu5LiHjnEjyyqxGVP0aH8fsBfmDxImp08C0R1y3ZGq/skOJqZFya2TB jz4Pi+dOugOOVlbK4sKKF3lqgM0haUR4aw8ca6CMG6VnZKRnQ3+EbaUBHWAAnV1VigMZ 6dAt2p9bsa27X/bF7oVjb691ev0jizEYw1E/KGkUDD9aPwVijwxvnLYHlJHqYscAzzwj of1jyhs5jjZC9JKZJHzrMX1CnHcyZ9ZCHYS2+TnQ/eREl5t72c9gfBgt4y39/KBau4IO 2CDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=T3n/W2hBHdYFkNX33oVXA/c4AmlMUnQ6/c07ibiZgNA=; b=dc2gRva0EBmZxzuqWJlI1O8KECX31KvAUmCSmA8g3j25pcagraQWuheHmj3roj/h0o Z8fI61RyRmnH16MEa9z2obn0ysKV1jEk93oICgooMksuP5DCDuaKXn9Zz99sS5l/znaD nCWsaA9/TMZi4XmbsruYHYO+I/NjMGKm03amL6RRZFHs0JEiE64tQ3m+8gYqbqbJWKNm b4T/y/ZhyaZh+FUmrGtNW1FHiDe/3EcU0Q4aeuVh11+1sChqPCdmchKolxXfKbcYbEQR i2JkpfZXmyB/5FceFBV4QayCPkoXg73trb68ea/8ckCivlMva9BfTyLvokGX9L5x9KcU McQg== X-Gm-Message-State: AOAM531znvif7Fr1XNFSdGq12Yr8pU6YkDk1pE/xi2TlaMg56Nl9ZU6I TvkUT8aADRfuLvCIcm17rYo= X-Google-Smtp-Source: ABdhPJzIt2rBdXxh/2GoY4AOMK55pcFLFr/zrcSvKk+/BeuQVksIgw5DuPtrN6AF4BTLVGrbii/BWg== X-Received: by 2002:a63:4083:: with SMTP id n125mr2869692pga.290.1603471858103; Fri, 23 Oct 2020 09:50:58 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id 204sm2658740pfz.74.2020.10.23.09.50.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:56 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 19/23] drm/msm: Drop struct_mutex in free_object() path Date: Fri, 23 Oct 2020 09:51:20 -0700 Message-Id: <20201023165136.561680-20-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that active_list/inactive_list is protected by mm_lock, we no longer need dev->struct_mutex in the free_object() path. Signed-off-by: Rob Clark Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_gem.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index b9de19462109..5d555750943e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -949,8 +949,6 @@ static void free_object(struct msm_gem_object *msm_obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -992,20 +990,14 @@ void msm_gem_free_work(struct work_struct *work) { struct msm_drm_private *priv = container_of(work, struct msm_drm_private, free_work); - struct drm_device *dev = priv->dev; struct llist_node *freed; struct msm_gem_object *msm_obj, *next; while ((freed = llist_del_all(&priv->free_list))) { - - mutex_lock(&dev->struct_mutex); - llist_for_each_entry_safe(msm_obj, next, freed, freed) free_object(msm_obj); - mutex_unlock(&dev->struct_mutex); - if (need_resched()) break; } From patchwork Fri Oct 23 16:51:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58D6CC4363A for ; Fri, 23 Oct 2020 16:52:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1490241A4 for ; Fri, 23 Oct 2020 16:51:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="uA55/QFU" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751453AbgJWQvv (ORCPT ); Fri, 23 Oct 2020 12:51:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51802 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752805AbgJWQvC (ORCPT ); Fri, 23 Oct 2020 12:51:02 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 00B28C0613CE; Fri, 23 Oct 2020 09:51:01 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id x13so1777155pfa.9; Fri, 23 Oct 2020 09:51:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=LTE3gomliTKXhUDnwYUTR88VGBt7xpw3Jdu/tPpRNR0=; b=uA55/QFU1AlBeLiQ9adAMOFw9ajUbsEOqjSI+scsnB37kktWMHBndl1pgaUIrcvsH7 2L3DtnASK2qK3RAOAUh9Qw0/Nl61rZqEhece2Jmzz+KacvX+VV2csTE/WXoxYbPO6a2y mVxVXRKjTJSLanFDCIWr3IIWTGQLsPv+luQtcq9ZFttwJgxDu7L7MTcJthBQE1RroXl3 XKVAMwY5MK8uUI0PKgazIEen/l24zQwx0dRovrv/bfuFVjo73qaUYpsgO6EOBZ9PIjvT kuv8MeSVDgzqh4Avj7OlKQzCfExyUNJQv127VxGlsMMtvjsRp7UHlCd0ct8fZpSK29HR XJGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=LTE3gomliTKXhUDnwYUTR88VGBt7xpw3Jdu/tPpRNR0=; b=k2GIh76FQ1fbTak7qZ5DqeOZ1tZcnK8roD3z2BAkNxjrigWhDxmXde7+iUup7BJ9qG 0p9z/aJlrE7sNZfze2ZwDf+KBcc8sNYA6m5l/iLLO/rI0Kdi9CD940lE+AdWNpXqrbcV muaeZb26yavBxac/pI4GyOYlRFnSYWwPNz7YnPQTT4pC+FsInIxrqRz96Ai1KlyUk+t5 nZc/yjrEKXtPX4UP/IyD0Iftd7s/A7QJxKrtmjfs0BjviC+TqMOSFNE/YNFTqsR46rG6 /6nKlmrEWc4ykXGCTZHaFeM146gwjx4kdU7kjO6JCewdTofOowent6uHyHcMZsqlg5er 6G8A== X-Gm-Message-State: AOAM531GVAm5ekJdDYDs6m7IYUgm5G7W8yJ2L1XU3s6q+RYMlZ/4Qlq0 +sA8F9T07tCxvJt/45/RuePj5HBuxvzq/Q== X-Google-Smtp-Source: ABdhPJzNZLIVfnD1VopgXy+WWAzsweLsxqXsr6Ufhbs+CxyY4SQdl1PPw1AYtDHC2gyT+5fKToGp4A== X-Received: by 2002:a05:6a00:2cf:b029:160:c0c:a95c with SMTP id b15-20020a056a0002cfb02901600c0ca95cmr94764pft.76.1603471860491; Fri, 23 Oct 2020 09:51:00 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id c12sm2434995pgd.57.2020.10.23.09.50.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Oct 2020 09:50:59 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Rob Clark , "Kristian H . Kristensen" , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v4 20/23] drm/msm: Remove msm_gem_free_work Date: Fri, 23 Oct 2020 09:51:21 -0700 Message-Id: <20201023165136.561680-21-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com> References: <20201023165136.561680-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we don't need struct_mutex in the free path, we can get rid of the asynchronous free altogether. Signed-off-by: Rob Clark Reviewed-by: Kristian H. 
Kristensen --- drivers/gpu/drm/msm/msm_drv.c | 3 --- drivers/gpu/drm/msm/msm_drv.h | 5 ----- drivers/gpu/drm/msm/msm_gem.c | 27 --------------------------- drivers/gpu/drm/msm/msm_gem.h | 1 - 4 files changed, 36 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 81cb2cecc829..49e6daf30b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -438,9 +438,6 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) priv->wq = alloc_ordered_workqueue("msm", 0); - INIT_WORK(&priv->free_work, msm_gem_free_work); - init_llist_head(&priv->free_list); - INIT_LIST_HEAD(&priv->inactive_list); mutex_init(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 2ef5cff19883..af296712eae8 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -188,10 +188,6 @@ struct msm_drm_private { struct list_head inactive_list; struct mutex mm_lock; - /* worker for delayed free of objects: */ - struct work_struct free_work; - struct llist_head free_list; - struct workqueue_struct *wq; unsigned int num_planes; @@ -291,7 +287,6 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5d555750943e..00b4788366c5 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -939,16 +939,6 @@ void msm_gem_free_object(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - if (llist_add(&msm_obj->freed, &priv->free_list)) - queue_work(priv->wq, &priv->free_work); -} - -static void free_object(struct msm_gem_object *msm_obj) -{ - struct drm_gem_object *obj = &msm_obj->base; - struct drm_device *dev = obj->dev; - struct msm_drm_private *priv = dev->dev_private; - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -986,23 +976,6 @@ static void free_object(struct msm_gem_object *msm_obj) kfree(msm_obj); } -void msm_gem_free_work(struct work_struct *work) -{ - struct msm_drm_private *priv = - container_of(work, struct msm_drm_private, free_work); - struct llist_node *freed; - struct msm_gem_object *msm_obj, *next; - - while ((freed = llist_del_all(&priv->free_list))) { - llist_for_each_entry_safe(msm_obj, next, - freed, freed) - free_object(msm_obj); - - if (need_resched()) - break; - } -} - /* convenience method to construct a GEM buffer object, and userspace handle */ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f4e73c6f07bf..ffa2130ee97d 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -204,7 +204,6 @@ static inline bool is_vunmapable(struct msm_gem_object *msm_obj) void msm_gem_purge(struct drm_gem_object *obj); void msm_gem_vunmap(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, * associated with the cmdstream submission for synchronization (and From patchwork Fri Oct 23 16:51:22 2020 
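The free_work/free_list machinery deleted above existed so that the final teardown could be deferred to a worker that could safely take dev->struct_mutex; now that the free path no longer needs struct_mutex (the preceding patches), msm_gem_free_object() can tear the object down directly. For reference, the deferred-free idiom being removed boils down to the following (a condensed sketch of the deleted code paths, with demo_* names standing in for the real entry points):

/* Producer: punt the object onto a lock-free list and kick the worker. */
static void demo_gem_free_object(struct drm_gem_object *obj)
{
	struct msm_gem_object *msm_obj = to_msm_bo(obj);
	struct msm_drm_private *priv = obj->dev->dev_private;

	/* llist_add() returns true if the list was empty, i.e. no work queued yet. */
	if (llist_add(&msm_obj->freed, &priv->free_list))
		queue_work(priv->wq, &priv->free_work);
}

/* Worker: drain the list in process context and do the real teardown. */
static void demo_gem_free_work(struct work_struct *work)
{
	struct msm_drm_private *priv =
		container_of(work, struct msm_drm_private, free_work);
	struct msm_gem_object *msm_obj, *next;
	struct llist_node *freed;

	while ((freed = llist_del_all(&priv->free_list))) {
		llist_for_each_entry_safe(msm_obj, next, freed, freed)
			free_object(msm_obj);
	}
}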
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284926 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC9F8C4363A for ; Fri, 23 Oct 2020 16:51:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 805E92463D for ; Fri, 23 Oct 2020 16:51:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="d7ZrPrRw" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S375660AbgJWQvo (ORCPT ); Fri, 23 Oct 2020 12:51:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752812AbgJWQvD (ORCPT ); Fri, 23 Oct 2020 12:51:03 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C62F0C0613CE; Fri, 23 Oct 2020 09:51:03 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id 1so1222170ple.2; Fri, 23 Oct 2020 09:51:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=9sfqxF9mk3Lc51iY+H1cSzrjPU+7CCgm4eEof0V4Vlw=; b=d7ZrPrRwGpKkt+z7k4ZEXHpsy9Z5QSGLOfa5ygon/JcL0Ctxqj15MGvdmRnYZGZA1v e6jYLqNgKMjJLSiPfZfpjxrAnKM2YlQ5eEP7z1JYjUVpwHxZ9b+LRUQzUpkn/orihx6R XQpnlJz6OVr1LDAvbDOfrZ8y1AP6MHrXzqo+/c4uG2OqB2akmssxlReze2seZiC3EAAR l/VNy0/G75zBLIqvJz3+i112P4jKWvLKBFadZ1QsNRMm6rs6Mw+n2883wpSR+iuCkDIv n5+TLfJnT6rpmz13kWVMZHYDaz4m/L0cz6JzM5nvA2A2OpEcacpaLliUonvaA4NEnKLW tzpA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=9sfqxF9mk3Lc51iY+H1cSzrjPU+7CCgm4eEof0V4Vlw=; b=GpNoyqJnavXmZPor2jW5Dnidz8IbJ/DeUp4TQG2RV+13KtSLzCQHPGsZhBBodUCKJ/ W9fWK8J0FGQJb10MG0zZMZ1UiiTG7S5+bXRZEqsvxXLvObpe1YWojx7FzleZlCppKWby FRJUFQqQjDMMl9Yjh6xlcrlyC/UZ7tFd5kIueGRqrsPIJ+7Qg6TBhB3X4aNlrnT1THlx 8QrqWmb26d/IaMumWEcUHb2gA+3hcVVuKGaEOPhmkMZ0EMXcNNxamfPjjKBUpzOcBm6F HpR7ru6Lpr2fFfWZTv3wn1ekBHJGx3L+X2Gm9jDRkTmqt5bmGZa76uIb3JXURVwpUckG IJ+Q== X-Gm-Message-State: AOAM532nFtm1MNB2UF5AjzqCOilUOvnDE8seCitEBN/0IviDIADl2gON AGTGdN2L7flKfWa3V1+oCsk= X-Google-Smtp-Source: ABdhPJzb6WAjkYuIX030962Q3iyTELJ0B0vy/zbByHllvY0Q2Vr/alj/EQmOhN0lsrhZMtghqKvLeg== X-Received: by 2002:a17:902:a612:b029:d6:6ae:4d47 with SMTP id u18-20020a170902a612b02900d606ae4d47mr17667plq.50.1603471863282; Fri, 23 Oct 2020 09:51:03 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
        Rob Clark, "Kristian H. Kristensen", Rob Clark, Sean Paul,
        David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 21/23] drm/msm: Drop struct_mutex in madvise path
Date: Fri, 23 Oct 2020 09:51:22 -0700
Message-Id: <20201023165136.561680-22-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

The obj->lock is sufficient for what we need.

This *does* have the implication that userspace can try to shoot itself
in the foot by racing madvise(DONTNEED) with submit.  But the result
will be about the same as doing madvise(DONTNEED) before the submit
ioctl, i.e. they might not get what they want if they race with the
shrinker.  But iova fault handling is robust enough, and userspace is
only shooting its own foot.

Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
 drivers/gpu/drm/msm/msm_drv.c | 11 ++---------
 drivers/gpu/drm/msm/msm_gem.c |  4 +---
 drivers/gpu/drm/msm/msm_gem.h |  2 --
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 49e6daf30b42..f2d58fe25497 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -912,14 +912,9 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data,
                return -EINVAL;
        }
 
-       ret = mutex_lock_interruptible(&dev->struct_mutex);
-       if (ret)
-               return ret;
-
        obj = drm_gem_object_lookup(file, args->handle);
        if (!obj) {
-               ret = -ENOENT;
-               goto unlock;
+               return -ENOENT;
        }
 
        ret = msm_gem_madvise(obj, args->madv);
@@ -928,10 +923,8 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data,
                ret = 0;
        }
 
-       drm_gem_object_put_locked(obj);
+       drm_gem_object_put(obj);
 
-unlock:
-       mutex_unlock(&dev->struct_mutex);
        return ret;
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 00b4788366c5..7f24a58c0390 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -673,8 +673,6 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
 
        msm_gem_lock(obj);
 
-       WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-
        if (msm_obj->madv != __MSM_MADV_PURGED)
                msm_obj->madv = madv;
 
@@ -691,7 +689,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
        struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
        WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-       WARN_ON(!msm_gem_is_locked(obj));
        WARN_ON(!is_purgeable(msm_obj));
        WARN_ON(obj->import_attach);
 
@@ -771,6 +768,7 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
        struct msm_drm_private *priv = obj->dev->dev_private;
 
        might_sleep();
+       WARN_ON(!msm_gem_is_locked(obj));
        WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
 
        if (!atomic_fetch_inc(&msm_obj->active_count)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ffa2130ee97d..d79e7019cc88 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -190,8 +190,6 @@ static inline bool is_active(struct msm_gem_object *msm_obj)
 
 static inline bool is_purgeable(struct msm_gem_object *msm_obj)
 {
-       WARN_ON(!msm_gem_is_locked(&msm_obj->base));
-       WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex));
        return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt &&
                        !msm_obj->base.dma_buf && !msm_obj->base.import_attach;
 }
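The race being accepted here is between the GEM_MADVISE ioctl and the
submit ioctl. From userspace the interface looks roughly like the sketch
below, based on the msm UAPI in include/uapi/drm/msm_drm.h (the demo_*
helpers, the bo_handle parameter, and the minimal error handling are
invented for illustration, not taken from Mesa):

#include <stdint.h>
#include <errno.h>
#include <xf86drm.h>        /* drmIoctl(), from libdrm */
#include "msm_drm.h"        /* struct drm_msm_gem_madvise, MSM_MADV_* */

/* Tell the kernel a buffer's backing pages may be reclaimed while it
 * sits in a userspace BO cache. */
static int demo_mark_dontneed(int fd, uint32_t bo_handle)
{
        struct drm_msm_gem_madvise req = {
                .handle = bo_handle,
                .madv   = MSM_MADV_DONTNEED,
        };

        return drmIoctl(fd, DRM_IOCTL_MSM_GEM_MADVISE, &req);
}

/* Before reusing the buffer, flip it back to WILLNEED and check whether
 * the backing store survived the shrinker. */
static int demo_reuse_or_realloc(int fd, uint32_t bo_handle)
{
        struct drm_msm_gem_madvise req = {
                .handle = bo_handle,
                .madv   = MSM_MADV_WILLNEED,
        };
        int ret = drmIoctl(fd, DRM_IOCTL_MSM_GEM_MADVISE, &req);

        if (ret)
                return ret;

        /* If the pages were purged, the old contents are gone and the
         * buffer must be reinitialized before use. */
        return req.retained ? 0 : -ENOENT;
}

Whether the WILLNEED lands before or after a racing submit, the worst
case is the same: `retained` comes back 0 and the application has to
repopulate the buffer, which is exactly the "shooting its own foot"
scenario the commit message accepts.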
From patchwork Fri Oct 23 16:51:23 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 284927
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
        Rob Clark, "Kristian H. Kristensen", Rob Clark, Sean Paul,
        David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 22/23] drm/msm: Drop struct_mutex in shrinker path
Date: Fri, 23 Oct 2020 09:51:23 -0700
Message-Id: <20201023165136.561680-23-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

Now that the inactive_list is protected by mm_lock, and everything else
that is per-object is protected by obj->lock, we no longer depend on
struct_mutex.

Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
 drivers/gpu/drm/msm/msm_gem.c          |  1 -
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
 2 files changed, 55 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 7f24a58c0390..a9a834bb7794 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -688,7 +688,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
        struct drm_device *dev = obj->dev;
        struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
        WARN_ON(!is_purgeable(msm_obj));
        WARN_ON(obj->import_attach);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 6be073b8ca08..6f4b1355725f 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -8,48 +8,13 @@
 #include "msm_gem.h"
 #include "msm_gpu_trace.h"
 
-static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
-{
-       /* NOTE: we are *closer* to being able to get rid of
-        * mutex_trylock_recursive().. the msm_gem code itself does
-        * not need struct_mutex, although codepaths that can trigger
-        * shrinker are still called in code-paths that hold the
-        * struct_mutex.
-        *
-        * Also, msm_obj->madv is protected by struct_mutex.
-        *
-        * The next step is probably split out a seperate lock for
-        * protecting inactive_list, so that shrinker does not need
-        * struct_mutex.
-        */
-       switch (mutex_trylock_recursive(&dev->struct_mutex)) {
-       case MUTEX_TRYLOCK_FAILED:
-               return false;
-
-       case MUTEX_TRYLOCK_SUCCESS:
-               *unlock = true;
-               return true;
-
-       case MUTEX_TRYLOCK_RECURSIVE:
-               *unlock = false;
-               return true;
-       }
-
-       BUG();
-}
-
 static unsigned long
 msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
-       struct drm_device *dev = priv->dev;
        struct msm_gem_object *msm_obj;
        unsigned long count = 0;
-       bool unlock;
-
-       if (!msm_gem_shrinker_lock(dev, &unlock))
-               return 0;
 
        mutex_lock(&priv->mm_lock);
 
@@ -63,9 +28,6 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 
        mutex_unlock(&priv->mm_lock);
 
-       if (unlock)
-               mutex_unlock(&dev->struct_mutex);
-
        return count;
 }
 
@@ -74,13 +36,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
-       struct drm_device *dev = priv->dev;
        struct msm_gem_object *msm_obj;
        unsigned long freed = 0;
-       bool unlock;
-
-       if (!msm_gem_shrinker_lock(dev, &unlock))
-               return SHRINK_STOP;
 
        mutex_lock(&priv->mm_lock);
 
@@ -98,9 +55,6 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 
        mutex_unlock(&priv->mm_lock);
 
-       if (unlock)
-               mutex_unlock(&dev->struct_mutex);
-
        if (freed > 0)
                trace_msm_gem_purge(freed << PAGE_SHIFT);
 
@@ -112,13 +66,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 {
        struct msm_drm_private *priv =
                container_of(nb, struct msm_drm_private, vmap_notifier);
-       struct drm_device *dev = priv->dev;
        struct msm_gem_object *msm_obj;
        unsigned unmapped = 0;
-       bool unlock;
-
-       if (!msm_gem_shrinker_lock(dev, &unlock))
-               return NOTIFY_DONE;
 
        mutex_lock(&priv->mm_lock);
 
@@ -141,9 +90,6 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 
        mutex_unlock(&priv->mm_lock);
 
-       if (unlock)
-               mutex_unlock(&dev->struct_mutex);
-
        *(unsigned long *)ptr += unmapped;
 
        if (unmapped > 0)
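The shape this leaves the shrinker in is the standard one: a dedicated,
narrow lock that only protects the reclaim list, so the count/scan
callbacks never have to trylock a driver-wide mutex that the allocating
path may already hold. A generic sketch of that pattern (not the msm
code; the demo_* names are invented):

#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/shrinker.h>

/* One small lock owned by the reclaim list, never held across anything
 * that can itself enter reclaim. */
static DEFINE_MUTEX(demo_mm_lock);
static LIST_HEAD(demo_inactive_list);

struct demo_obj {
        struct list_head mm_list;
        unsigned long nr_pages;
        bool purgeable;
};

static unsigned long
demo_count(struct shrinker *s, struct shrink_control *sc)
{
        struct demo_obj *obj;
        unsigned long count = 0;

        mutex_lock(&demo_mm_lock);
        list_for_each_entry(obj, &demo_inactive_list, mm_list)
                if (obj->purgeable)
                        count += obj->nr_pages;
        mutex_unlock(&demo_mm_lock);

        return count;
}

static struct shrinker demo_shrinker = {
        .count_objects = demo_count,
        /* .scan_objects would walk the same list under demo_mm_lock and
         * purge objects until sc->nr_to_scan is reached. */
        .seeks = DEFAULT_SEEKS,
};

The important property is that nothing takes demo_mm_lock while holding
a lock that reclaim might need, which is exactly what made the old
mutex_trylock_recursive() dance on struct_mutex necessary.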
From patchwork Fri Oct 23 16:51:24 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 292300
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
        Rob Clark, "Kristian H. Kristensen", Rob Clark, Sean Paul,
        David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 23/23] drm/msm: Don't implicit-sync if only a single ring
Date: Fri, 23 Oct 2020 09:51:24 -0700
Message-Id: <20201023165136.561680-24-robdclark@gmail.com>
In-Reply-To: <20201023165136.561680-1-robdclark@gmail.com>
References: <20201023165136.561680-1-robdclark@gmail.com>

From: Rob Clark

If there is only a single ring (no preemption), everything executes in
FIFO order and there is no need for implicit sync.

Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as
behavior is undefined when fences are not used to synchronize buffer
usage across contexts (which is the only case where multiple rings of
different priority could come into play).

Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d04c349d8112..b6babc7f9bb8 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -283,7 +283,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit)
        return ret;
 }
 
-static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit)
+static int submit_fence_sync(struct msm_gem_submit *submit, bool implicit_sync)
 {
        int i, ret = 0;
 
@@ -303,7 +303,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit)
                        return ret;
                }
 
-               if (no_implicit)
+               if (!implicit_sync)
                        continue;
 
                ret = msm_gem_sync_object(&msm_obj->base, submit->ring->fctx,
@@ -774,7 +774,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
        if (ret)
                goto out;
 
-       ret = submit_fence_sync(submit, !!(args->flags & MSM_SUBMIT_NO_IMPLICIT));
+       ret = submit_fence_sync(submit, (gpu->nr_rings > 1) &&
+                       !(args->flags & MSM_SUBMIT_NO_IMPLICIT));
        if (ret)
                goto out;
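Spelled out as a stand-alone predicate, the gate the last hunk adds is
equivalent to something like this (illustrative only; the helper name is
invented and the real check lives inline in msm_ioctl_gem_submit()):

#include <stdbool.h>
#include <stdint.h>
#include "msm_drm.h"        /* MSM_SUBMIT_NO_IMPLICIT */

/* Implicit fencing is only worth doing when there is more than one ring
 * (i.e. different-priority submissions can overtake each other) and
 * userspace has not explicitly opted out. */
static bool demo_want_implicit_sync(unsigned int nr_rings, uint32_t flags)
{
        if (nr_rings <= 1)
                return false;   /* single ring: FIFO order already */

        if (flags & MSM_SUBMIT_NO_IMPLICIT)
                return false;   /* userspace opted out explicitly */

        return true;
}

Userspace that already sets MSM_SUBMIT_NO_IMPLICIT sees no change; the
nr_rings check only matters for submitters that still rely on implicit
fencing.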