From patchwork Mon Oct 19 20:46:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284994 Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id mn11sm327738pjb.19.2020.10.19.13.45.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:26 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , Jordan Crouse , Bjorn Andersson , Eric Anholt , Emil Velikov , AngeloGioacchino Del Regno , "Gustavo A. R. Silva" , Jonathan Marek , Akhil P Oommen , Sharat Masetty , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 01/23] drm/msm: Fix a couple incorrect usages of get_vaddr_active() Date: Mon, 19 Oct 2020 13:46:02 -0700 Message-Id: <20201019204636.139997-2-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark The microcode bo's should never be madvise(WONTNEED), so these should not be using msm_gem_get_vaddr_active(). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index d6804a802355..b2593c6bd2ac 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -426,7 +426,7 @@ static int a5xx_preempt_start(struct msm_gpu *gpu) static void a5xx_ucode_check_version(struct a5xx_gpu *a5xx_gpu, struct drm_gem_object *obj) { - u32 *buf = msm_gem_get_vaddr_active(obj); + u32 *buf = msm_gem_get_vaddr(obj); if (IS_ERR(buf)) return; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 948f3656c20c..0894703a742e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -522,7 +522,7 @@ static int a6xx_cp_init(struct msm_gpu *gpu) static void a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, struct drm_gem_object *obj) { - u32 *buf = msm_gem_get_vaddr_active(obj); + u32 *buf = msm_gem_get_vaddr(obj); if (IS_ERR(buf)) return; From patchwork Mon Oct 19 20:46:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C740C43457 for ; Mon, 19 Oct 2020 20:45:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 950DF22400 for ; Mon, 19 Oct 2020 20:45:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="G0ZnwW1s" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S1727850AbgJSUpd (ORCPT ); Mon, 19 Oct 2020 16:45:33 -0400 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 02/23] drm/msm/gem: Add obj->lock wrappers Date: Mon, 19 Oct 2020 13:46:03 -0700 Message-Id: <20201019204636.139997-3-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark This will make it easier to transition over to obj->resv locking for everything that is per-bo locking.
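
[Editor's note: a minimal sketch of the wrapper pattern this patch introduces, before the diff below. It is not the driver code; a pthread mutex and stand-in structs play the roles of the kernel's struct mutex and GEM object types. What it shows is that call sites only ever name msm_gem_lock()/msm_gem_unlock(), so the lock behind the wrappers can later be swapped (e.g. for obj->resv) without touching the callers.]

/*
 * Userspace analogue of the obj->lock wrappers (illustrative only).
 * Build with: cc -pthread demo.c
 */
#include <pthread.h>
#include <stdio.h>

struct gem_object {                     /* stand-in for drm_gem_object  */
        int dummy;
};

struct msm_gem_object {                 /* stand-in for msm_gem_object  */
        struct gem_object base;         /* must stay the first member   */
        pthread_mutex_t lock;           /* plays the role of obj->lock  */
};

/* simplified container_of(): valid because 'base' is the first member */
#define to_msm_bo(obj) ((struct msm_gem_object *)(obj))

static void msm_gem_lock(struct gem_object *obj)
{
        pthread_mutex_lock(&to_msm_bo(obj)->lock);
}

static void msm_gem_unlock(struct gem_object *obj)
{
        pthread_mutex_unlock(&to_msm_bo(obj)->lock);
}

int main(void)
{
        struct msm_gem_object bo = { .lock = PTHREAD_MUTEX_INITIALIZER };

        msm_gem_lock(&bo.base);         /* callers never touch bo.lock  */
        printf("critical section: object is locked\n");
        msm_gem_unlock(&bo.base);
        return 0;
}

[The same indirection is what lets later patches in the series add msm_gem_is_locked() assertions and, as the commit message says, eventually retarget the helpers at the dma_resv lock.]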
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 99 ++++++++++++++++------------------- drivers/gpu/drm/msm/msm_gem.h | 28 ++++++++++ 2 files changed, 74 insertions(+), 53 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 14e14caf90f9..afef9c6b1a1c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -178,15 +178,15 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **p; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } p = get_pages(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return p; } @@ -252,14 +252,14 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) * vm_ops.open/drm_gem_mmap_obj and close get and put * a reference on obj. So, we dont need to hold one here. */ - err = mutex_lock_interruptible(&msm_obj->lock); + err = msm_gem_lock_interruptible(obj); if (err) { ret = VM_FAULT_NOPAGE; goto out; } if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return VM_FAULT_SIGBUS; } @@ -280,7 +280,7 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) ret = vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV)); out_unlock: - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); out: return ret; } @@ -289,10 +289,9 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) static uint64_t mmap_offset(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); /* Make it mmapable */ ret = drm_gem_create_mmap_offset(obj); @@ -308,11 +307,10 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) { uint64_t offset; - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); offset = mmap_offset(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return offset; } @@ -322,7 +320,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) @@ -341,7 +339,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace == aspace) @@ -360,14 +358,14 @@ static void del_vma(struct msm_gem_vma *vma) kfree(vma); } -/* Called with msm_obj->lock locked */ +/* Called with msm_obj locked */ static void put_iova(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma, *tmp; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->aspace) { @@ -382,11 +380,10 @@ static int msm_gem_get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; int ret = 0; - 
WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); @@ -421,7 +418,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, if (msm_obj->flags & MSM_BO_MAP_PRIV) prot |= IOMMU_PRIV; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) return -EBUSY; @@ -446,11 +443,10 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); u64 local; int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -461,7 +457,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -479,12 +475,11 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -495,12 +490,11 @@ int msm_gem_get_iova(struct drm_gem_object *obj, uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); WARN_ON(!vma); return vma ? vma->iova : 0; @@ -514,16 +508,15 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -564,20 +557,20 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) if (obj->import_attach) return ERR_PTR(-ENODEV); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); if (WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } /* increment vmap_count *before* vmap() call, so shrinker can - * check vmap_count (is_vunmapable()) outside of msm_obj->lock. + * check vmap_count (is_vunmapable()) outside of msm_obj lock. 
* This guarantees that we won't try to msm_gem_vunmap() this * same object from within the vmap() call (while we already - * hold msm_obj->lock) + * hold msm_obj lock) */ msm_obj->vmap_count++; @@ -595,12 +588,12 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) } } - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return msm_obj->vaddr; fail: msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(ret); } @@ -624,10 +617,10 @@ void msm_gem_put_vaddr(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(msm_obj->vmap_count < 1); msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* Update madvise status, returns true if not purged, else @@ -637,7 +630,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); @@ -646,7 +639,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) madv = msm_obj->madv; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return (madv != __MSM_MADV_PURGED); } @@ -683,14 +676,14 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } static void msm_gem_vunmap_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj))) return; @@ -705,7 +698,7 @@ void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass) mutex_lock_nested(&msm_obj->lock, subclass); msm_gem_vunmap_locked(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* must be called before _move_to_active().. 
*/ @@ -816,7 +809,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); switch (msm_obj->madv) { case __MSM_MADV_PURGED: @@ -884,7 +877,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) describe_fence(fence, "Exclusive", m); rcu_read_unlock(); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) @@ -929,7 +922,7 @@ static void free_object(struct msm_gem_object *msm_obj) list_del(&msm_obj->mm_list); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); put_iova(obj); @@ -950,7 +943,7 @@ static void free_object(struct msm_gem_object *msm_obj) drm_gem_object_release(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); kfree(msm_obj); } @@ -1070,10 +1063,10 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, struct msm_gem_vma *vma; struct page **pages; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = add_vma(obj, NULL); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); if (IS_ERR(vma)) { ret = PTR_ERR(vma); goto fail; @@ -1157,22 +1150,22 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, npages = size / PAGE_SIZE; msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); msm_obj->sgt = sgt; msm_obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); if (!msm_obj->pages) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); ret = -ENOMEM; goto fail; } ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages); if (ret) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); goto fail; } - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); mutex_lock(&dev->struct_mutex); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a1bf741b9b89..f6482154e8bb 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,6 +93,34 @@ struct msm_gem_object { }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) +static inline void +msm_gem_lock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + mutex_lock(&msm_obj->lock); +} + +static inline int +msm_gem_lock_interruptible(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_lock_interruptible(&msm_obj->lock); +} + +static inline void +msm_gem_unlock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + mutex_unlock(&msm_obj->lock); +} + +static inline bool +msm_gem_is_locked(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_is_locked(&msm_obj->lock); +} + static inline bool is_active(struct msm_gem_object *msm_obj) { return atomic_read(&msm_obj->active_count); From patchwork Mon Oct 19 20:46:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284993 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT 
autolearn=ham autolearn_force=no version=3.4.0 Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id t3sm435790pgm.42.2020.10.19.13.45.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:31 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 03/23] drm/msm/gem: Rename internal get_iova_locked helper Date: Mon, 19 Oct 2020 13:46:04 -0700 Message-Id: <20201019204636.139997-4-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We'll need to introduce a _locked() version of msm_gem_get_iova(), so we need to make that name available. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index afef9c6b1a1c..dec89fe79025 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -376,7 +376,7 @@ put_iova(struct drm_gem_object *obj) } } -static int msm_gem_get_iova_locked(struct drm_gem_object *obj, +static int get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { @@ -448,7 +448,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, &local, + ret = get_iova_locked(obj, aspace, &local, range_start, range_end); if (!ret) @@ -478,7 +478,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int ret; msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); + ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX); msm_gem_unlock(obj); return ret; From patchwork Mon Oct 19 20:46:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284983 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9A2CC43457 for ; Mon, 19 Oct 2020 20:47:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 97EE2223FB for ; Mon, 19 Oct 2020 20:47:12 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="TOhsBLCY" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728134AbgJSUpj (ORCPT ); Mon, 19 Oct 2020 16:45:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44350 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727057AbgJSUpi (ORCPT ); Mon, 19 Oct 2020 16:45:38 -0400 
Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id f4sm331129pjs.8.2020.10.19.13.45.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:37 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , Thomas Zimmermann , Emil Velikov , Sam Ravnborg , Abhinav Kumar , Brian Masney , Christophe JAILLET , Matthias Kaehlcke , Jeffrey Hugo , Harigovindan P , Rajendra Nayak , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v3 04/23] drm/msm/gem: Move prototypes to msm_gem.h Date: Mon, 19 Oct 2020 13:46:05 -0700 Message-Id: <20201019204636.139997-5-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 1 + drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 1 + drivers/gpu/drm/msm/dsi/dsi_host.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 54 ---------------------- drivers/gpu/drm/msm/msm_fbdev.c | 1 + drivers/gpu/drm/msm/msm_gem.h | 56 +++++++++++++++++++++++ 6 files changed, 60 insertions(+), 54 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index a0253297bc76..b65b2329cc8d 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -11,6 +11,7 @@ #include #include "mdp4_kms.h" +#include "msm_gem.h" struct mdp4_crtc { struct drm_crtc base; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index c39dad151bb6..81fbd52ad7e7 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -15,6 +15,7 @@ #include #include "mdp5_kms.h" +#include "msm_gem.h" #define CURSOR_WIDTH 64 #define CURSOR_HEIGHT 64 diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index cee5c50c8e52..71160b4d77a0 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -26,6 +26,7 @@ #include "sfpb.xml.h" #include "dsi_cfg.h" #include "msm_kms.h" +#include "msm_gem.h" #define DSI_RESET_TOGGLE_DELAY_MS 20 diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 7fbcdaebeff8..713a0ae28125 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -273,28 +273,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, void msm_gem_shrinker_init(struct drm_device *dev); void msm_gem_shrinker_cleanup(struct drm_device *dev); -int msm_gem_mmap_obj(struct drm_gem_object *obj, - struct vm_area_struct *vma); -int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); -vm_fault_t msm_gem_fault(struct vm_fault *vmf); -uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, 
uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -uint64_t msm_gem_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -struct page **msm_gem_get_pages(struct drm_gem_object *obj); -void msm_gem_put_pages(struct drm_gem_object *obj); -int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, - struct drm_mode_create_dumb *args); -int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, - uint32_t handle, uint64_t *offset); struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); @@ -303,38 +281,8 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void *msm_gem_get_vaddr(struct drm_gem_object *obj); -void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); -void msm_gem_put_vaddr(struct drm_gem_object *obj); -int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); -int msm_gem_sync_object(struct drm_gem_object *obj, - struct msm_fence_context *fctx, bool exclusive); -void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu); -void msm_gem_active_put(struct drm_gem_object *obj); -int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout); -int msm_gem_cpu_fini(struct drm_gem_object *obj); -void msm_gem_free_object(struct drm_gem_object *obj); -int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, - uint32_t size, uint32_t flags, uint32_t *handle, char *name); -struct drm_gem_object *msm_gem_new(struct drm_device *dev, - uint32_t size, uint32_t flags); -struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev, - uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace, bool locked); -struct drm_gem_object *msm_gem_import(struct drm_device *dev, - struct dma_buf *dmabuf, struct sg_table *sgt); void msm_gem_free_work(struct work_struct *work); -__printf(2, 3) -void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); - int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, @@ -447,8 +395,6 @@ void __init msm_dpu_register(void); void __exit msm_dpu_unregister(void); #ifdef CONFIG_DEBUG_FS -void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); -void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m); int msm_debugfs_late_init(struct drm_device *dev); int msm_rd_debugfs_init(struct drm_minor *minor); diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index 47235f8c5922..678dba1725a6 100644 
--- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -9,6 +9,7 @@ #include #include "msm_drv.h" +#include "msm_gem.h" #include "msm_kms.h" extern int msm_gem_mmap_obj(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f6482154e8bb..fbad08badf43 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,6 +93,62 @@ struct msm_gem_object { }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) +int msm_gem_mmap_obj(struct drm_gem_object *obj, + struct vm_area_struct *vma); +int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); +vm_fault_t msm_gem_fault(struct vm_fault *vmf); +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_get_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +uint64_t msm_gem_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); +struct page **msm_gem_get_pages(struct drm_gem_object *obj); +void msm_gem_put_pages(struct drm_gem_object *obj); +int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args); +int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, + uint32_t handle, uint64_t *offset); +void *msm_gem_get_vaddr(struct drm_gem_object *obj); +void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); +void msm_gem_put_vaddr(struct drm_gem_object *obj); +int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); +int msm_gem_sync_object(struct drm_gem_object *obj, + struct msm_fence_context *fctx, bool exclusive); +void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu); +void msm_gem_active_put(struct drm_gem_object *obj); +int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout); +int msm_gem_cpu_fini(struct drm_gem_object *obj); +void msm_gem_free_object(struct drm_gem_object *obj); +int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, + uint32_t size, uint32_t flags, uint32_t *handle, char *name); +struct drm_gem_object *msm_gem_new(struct drm_device *dev, + uint32_t size, uint32_t flags); +struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev, + uint32_t size, uint32_t flags); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, + uint32_t flags, struct msm_gem_address_space *aspace, + struct drm_gem_object **bo, uint64_t *iova); +void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size, + uint32_t flags, struct msm_gem_address_space *aspace, + struct drm_gem_object **bo, uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, + struct msm_gem_address_space *aspace, bool locked); +struct drm_gem_object *msm_gem_import(struct drm_device *dev, + struct dma_buf *dmabuf, struct sg_table *sgt); +__printf(2, 3) +void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); +#ifdef CONFIG_DEBUG_FS +void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); +void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); +#endif + static inline void msm_gem_lock(struct drm_gem_object 
*obj) { From patchwork Mon Oct 19 20:46:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292368 Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id w74sm612234pff.200.2020.10.19.13.45.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:39 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 05/23] drm/msm/gem: Add some _locked() helpers Date: Mon, 19 Oct 2020 13:46:06 -0700 Message-Id: <20201019204636.139997-6-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark When we cut-over to using dma_resv_lock/etc instead of msm_obj->lock, we'll need these for the submit path (where resv->lock is already held). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 89 +++++++++++++++++++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 6 +++ 2 files changed, 75 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index dec89fe79025..e0d8d739b068 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -435,18 +435,14 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, msm_obj->sgt, obj->size >> PAGE_SHIFT); } -/* - * get iova and pin it. Should have a matching put - * limits iova to specified range (in pages) - */ -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, +static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { u64 local; int ret; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); ret = get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -457,10 +453,32 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; + return ret; +} + +/* + * get iova and pin it. Should have a matching put + * limits iova to specified range (in pages) + */ +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end) +{ + int ret; + + msm_gem_lock(obj); + ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); msm_gem_unlock(obj); + return ret; } +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova) +{ + return get_and_pin_iova_range_locked(obj, aspace, iova, 0, U64_MAX); +} + /* get iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) @@ -501,21 +519,31 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, } /* - * Unpin a iova by updating the reference counts. 
The memory isn't actually - * purged until something else (shrinker, mm_notifier, destroy, etc) decides - * to get rid of it + * Locked variant of msm_gem_unpin_iova() */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { struct msm_gem_vma *vma; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); + vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); +} +/* + * Unpin a iova by updating the reference counts. The memory isn't actually + * purged until something else (shrinker, mm_notifier, destroy, etc) decides + * to get rid of it + */ +void msm_gem_unpin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace) +{ + msm_gem_lock(obj); + msm_gem_unpin_iova_locked(obj, aspace); msm_gem_unlock(obj); } @@ -554,15 +582,14 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret = 0; + WARN_ON(!msm_gem_is_locked(obj)); + if (obj->import_attach) return ERR_PTR(-ENODEV); - msm_gem_lock(obj); - if (WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); - msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } @@ -588,20 +615,29 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) } } - msm_gem_unlock(obj); return msm_obj->vaddr; fail: msm_obj->vmap_count--; - msm_gem_unlock(obj); return ERR_PTR(ret); } -void *msm_gem_get_vaddr(struct drm_gem_object *obj) +void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj) { return get_vaddr(obj, MSM_MADV_WILLNEED); } +void *msm_gem_get_vaddr(struct drm_gem_object *obj) +{ + void *ret; + + msm_gem_lock(obj); + ret = msm_gem_get_vaddr_locked(obj); + msm_gem_unlock(obj); + + return ret; +} + /* * Don't use this! 
It is for the very special case of dumping * submits from GPU hangs or faults, were the bo may already @@ -610,16 +646,29 @@ void *msm_gem_get_vaddr(struct drm_gem_object *obj) */ void *msm_gem_get_vaddr_active(struct drm_gem_object *obj) { - return get_vaddr(obj, __MSM_MADV_PURGED); + void *ret; + + msm_gem_lock(obj); + ret = get_vaddr(obj, __MSM_MADV_PURGED); + msm_gem_unlock(obj); + + return ret; } -void msm_gem_put_vaddr(struct drm_gem_object *obj) +void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(msm_obj->vmap_count < 1); + msm_obj->vmap_count--; +} + +void msm_gem_put_vaddr(struct drm_gem_object *obj) +{ + msm_gem_lock(obj); + msm_gem_put_vaddr_locked(obj); msm_gem_unlock(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index fbad08badf43..d55d5401a2d2 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -103,10 +103,14 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova); uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); struct page **msm_gem_get_pages(struct drm_gem_object *obj); @@ -115,8 +119,10 @@ int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, struct drm_mode_create_dumb *args); int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, uint32_t handle, uint64_t *offset); +void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj); void *msm_gem_get_vaddr(struct drm_gem_object *obj); void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); +void msm_gem_put_vaddr_locked(struct drm_gem_object *obj); void msm_gem_put_vaddr(struct drm_gem_object *obj); int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); int msm_gem_sync_object(struct drm_gem_object *obj, From patchwork Mon Oct 19 20:46:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284992 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FC75C43467 for ; Mon, 19 Oct 2020 20:45:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 392C3223FD for ; Mon, 19 Oct 2020 20:45:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cPdWqqA+" Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728452AbgJSUpo (ORCPT ); Mon, 19 Oct 2020 16:45:44 -0400 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 06/23] drm/msm/gem: Move locking in shrinker path Date: Mon, 19 Oct 2020 13:46:07 -0700 Message-Id: <20201019204636.139997-7-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Move grabbing the bo lock into shrinker, with a msm_gem_trylock() to skip over bo's that are already locked. This gets rid of the nested lock classes.
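
[Editor's note: before the diff, a small self-contained sketch of the trylock-and-skip idea described above. This is userspace C with pthreads and made-up types, not the driver's shrinker: pthread_mutex_trylock() stands in for msm_gem_trylock(), and objects whose lock is already held elsewhere are skipped instead of being locked with a nested lock class.]

/*
 * Userspace sketch of the shrinker's trylock-and-skip loop (illustrative).
 * Build with: cc -pthread demo.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct bo {
        pthread_mutex_t lock;
        bool purgeable;
        size_t nr_pages;
};

static size_t shrinker_scan(struct bo *objs, size_t n, size_t nr_to_scan)
{
        size_t freed = 0;

        for (size_t i = 0; i < n && freed < nr_to_scan; i++) {
                /* busy object: someone else holds the lock, move on */
                if (pthread_mutex_trylock(&objs[i].lock) != 0)
                        continue;

                if (objs[i].purgeable) {
                        /* "purge" runs under the lock we just took */
                        objs[i].purgeable = false;
                        freed += objs[i].nr_pages;
                }

                pthread_mutex_unlock(&objs[i].lock);
        }

        return freed;
}

int main(void)
{
        struct bo objs[2] = {
                { PTHREAD_MUTEX_INITIALIZER, true, 16 },
                { PTHREAD_MUTEX_INITIALIZER, true, 32 },
        };

        printf("freed %zu pages\n", shrinker_scan(objs, 2, 64));
        return 0;
}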
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 24 +++++---------------- drivers/gpu/drm/msm/msm_gem.h | 29 ++++++++++---------------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 27 +++++++++++++++++------- 3 files changed, 35 insertions(+), 45 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index e0d8d739b068..1195847714ba 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,8 +17,6 @@ #include "msm_gpu.h" #include "msm_mmu.h" -static void msm_gem_vunmap_locked(struct drm_gem_object *obj); - static dma_addr_t physaddr(struct drm_gem_object *obj) { @@ -693,20 +691,19 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) return (madv != __MSM_MADV_PURGED); } -void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) +void msm_gem_purge(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(!is_purgeable(msm_obj)); WARN_ON(obj->import_attach); - mutex_lock_nested(&msm_obj->lock, subclass); - put_iova(obj); - msm_gem_vunmap_locked(obj); + msm_gem_vunmap(obj); put_pages(obj); @@ -724,11 +721,9 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); - - msm_gem_unlock(obj); } -static void msm_gem_vunmap_locked(struct drm_gem_object *obj) +void msm_gem_vunmap(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -741,15 +736,6 @@ static void msm_gem_vunmap_locked(struct drm_gem_object *obj) msm_obj->vaddr = NULL; } -void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - - mutex_lock_nested(&msm_obj->lock, subclass); - msm_gem_vunmap_locked(obj); - msm_gem_unlock(obj); -} - /* must be called before _move_to_active().. 
*/ int msm_gem_sync_object(struct drm_gem_object *obj, struct msm_fence_context *fctx, bool exclusive) @@ -986,7 +972,7 @@ static void free_object(struct msm_gem_object *msm_obj) drm_prime_gem_destroy(obj, msm_obj->sgt); } else { - msm_gem_vunmap_locked(obj); + msm_gem_vunmap(obj); put_pages(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d55d5401a2d2..c5232b8da794 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -162,6 +162,13 @@ msm_gem_lock(struct drm_gem_object *obj) mutex_lock(&msm_obj->lock); } +static inline bool __must_check +msm_gem_trylock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_trylock_recursive(&msm_obj->lock) == MUTEX_TRYLOCK_SUCCESS; +} + static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { @@ -190,6 +197,7 @@ static inline bool is_active(struct msm_gem_object *msm_obj) static inline bool is_purgeable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex)); return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach; @@ -197,27 +205,12 @@ static inline bool is_purgeable(struct msm_gem_object *msm_obj) static inline bool is_vunmapable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); return (msm_obj->vmap_count == 0) && msm_obj->vaddr; } -/* The shrinker can be triggered while we hold objA->lock, and need - * to grab objB->lock to purge it. Lockdep just sees these as a single - * class of lock, so we use subclasses to teach it the difference. - * - * OBJ_LOCK_NORMAL is implicit (ie. normal mutex_lock() call), and - * OBJ_LOCK_SHRINKER is used by shrinker. - * - * It is *essential* that we never go down paths that could trigger the - * shrinker for a purgable object. This is ensured by checking that - * msm_obj->madv == MSM_MADV_WILLNEED. 
- */ -enum msm_gem_lock { - OBJ_LOCK_NORMAL, - OBJ_LOCK_SHRINKER, -}; - -void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass); -void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass); +void msm_gem_purge(struct drm_gem_object *obj); +void msm_gem_vunmap(struct drm_gem_object *obj); void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 482576d7a39a..2dc0ffa925b4 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -52,8 +52,11 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return 0; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) count += msm_obj->base.size >> PAGE_SHIFT; + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -78,10 +81,13 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) { - msm_gem_purge(&msm_obj->base, OBJ_LOCK_SHRINKER); + msm_gem_purge(&msm_obj->base); freed += msm_obj->base.size >> PAGE_SHIFT; } + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -107,15 +113,20 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) return NOTIFY_DONE; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_vunmapable(msm_obj)) { - msm_gem_vunmap(&msm_obj->base, OBJ_LOCK_SHRINKER); - /* since we don't know any better, lets bail after a few - * and if necessary the shrinker will be invoked again. - * Seems better than unmapping *everything* - */ - if (++unmapped >= 15) - break; + msm_gem_vunmap(&msm_obj->base); + unmapped++; } + msm_gem_unlock(&msm_obj->base); + + /* since we don't know any better, lets bail after a few + * and if necessary the shrinker will be invoked again. 
+ * Seems better than unmapping *everything* + */ + if (++unmapped >= 15) + break; } if (unlock) From patchwork Mon Oct 19 20:46:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284984 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 962FDC433DF for ; Mon, 19 Oct 2020 20:47:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3B4E8223FB for ; Mon, 19 Oct 2020 20:47:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="sv6KED9V" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728838AbgJSUrC (ORCPT ); Mon, 19 Oct 2020 16:47:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44376 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728514AbgJSUpp (ORCPT ); Mon, 19 Oct 2020 16:45:45 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9269CC0613CE; Mon, 19 Oct 2020 13:45:45 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id x13so682810pgp.7; Mon, 19 Oct 2020 13:45:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YsmBQA+C1bDyQWIRHo7QvKc1r4T12MZo6bgQ7Za9044=; b=sv6KED9V3jSoDw9wqZbup/+wgDtauftLvDixRUmyQvz5CDRvRo7PqbqXViq8xhqezW JVwWQRJBZT873Pa73NmcwNfz7+5wS7fxg+wOoqX6m5JE0jqzJnTK9uPvrAgwQYSfKMmG nbj0hZTDLZWA5YWleFB+ERunnFzjxId9DNeNy52Kc7K6NrkaFvKN6xYU9pbwNwQ4RlkJ RnQ3XRyJtyXI/vV8uZGIoKTIT9oTOlEeqR/0MDEBqSLauG2RdbBtQPy25j+CxBpa2vUd a+RvjruBRflX3JQTfyCU4fMOfDl7SHEqPgpPYZeN+x6XaG1onzqP1fyrLlt6S3Ev6llM 6QYA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YsmBQA+C1bDyQWIRHo7QvKc1r4T12MZo6bgQ7Za9044=; b=M5zBEalfWgd0/CBmwEWG/2slFZE1If+5qC23cNOp0/09GLqIm4w1htplRnl6hdRAIH IuhhIAua0xLFbk0FEFXWwgWdoObxd4Wf64VkQ96JNRj+UFDqVjZ82XncFIfOsq69fPnl Lr8kujfV6WGeFLEJXsDqDG5B/ZZ69xB4xgYEp2M5Cl9GVCmKLYq+GZVRAAVU4h4h7Zm+ hREX/+QXiHJJQoWpdFzl3DP+HNntNDJfSqbYQ0u6SRbYcYsGUkFWbre4oqqYwRyKyzYk 12XQ0MSMtSh+peljvFFChRAnPkz0GWHvfRKjVc77lO1DaJ6p34zwCYAS5iQpI4xo4qKP zxug== X-Gm-Message-State: AOAM531V17sddbfd9IN7MmjHHrrHd1riNnShK2fGoccT7W+SmUv6eQnA rOhmX8Dk8Qc0S8dWNJXyJtU= X-Google-Smtp-Source: ABdhPJwgXc9rO0P4ZP6AoZJPC3QQJ23nqx0R76qLqitHrUQ8/95otdi+x9H6JaCA08EmPMSo6JrcLQ== X-Received: by 2002:a63:d117:: with SMTP id k23mr1341157pgg.176.1603140345047; Mon, 19 Oct 2020 13:45:45 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id u69sm678305pfc.27.2020.10.19.13.45.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:43 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 07/23] drm/msm/submit: Move copy_from_user ahead of locking bos Date: Mon, 19 Oct 2020 13:46:08 -0700 Message-Id: <20201019204636.139997-8-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We cannot switch to using obj->resv for locking without first moving all the copy_from_user() ahead of submit_lock_objects(). Otherwise in the mm fault path we aquire mm->mmap_sem before obj lock, but in the submit path the order is reversed. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 3 + drivers/gpu/drm/msm/msm_gem_submit.c | 121 ++++++++++++++++----------- 2 files changed, 76 insertions(+), 48 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c5232b8da794..0b7dda312992 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -240,7 +240,10 @@ struct msm_gem_submit { uint32_t type; uint32_t size; /* in dwords */ uint64_t iova; + uint32_t offset;/* in dwords */ uint32_t idx; /* cmdstream buffer idx in bos[] */ + uint32_t nr_relocs; + struct drm_msm_gem_submit_reloc *relocs; } *cmd; /* array of size nr_cmds */ struct { uint32_t flags; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index aa5c60a7132d..002130d826aa 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -62,11 +62,16 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, void msm_gem_submit_free(struct msm_gem_submit *submit) { + unsigned i; + dma_fence_put(submit->fence); list_del(&submit->node); put_pid(submit->pid); msm_submitqueue_put(submit->queue); + for (i = 0; i < submit->nr_cmds; i++) + kfree(submit->cmd[i].relocs); + kfree(submit); } @@ -150,6 +155,60 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, return ret; } +static int submit_lookup_cmds(struct msm_gem_submit *submit, + struct drm_msm_gem_submit *args, struct drm_file *file) +{ + unsigned i, sz; + int ret = 0; + + for (i = 0; i < args->nr_cmds; i++) { + struct drm_msm_gem_submit_cmd submit_cmd; + void __user *userptr = + u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); + + ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); + if (ret) { + ret = -EFAULT; + goto out; + } + + /* validate input from userspace: */ + switch (submit_cmd.type) { + case MSM_SUBMIT_CMD_BUF: + case MSM_SUBMIT_CMD_IB_TARGET_BUF: + case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: + break; + default: + DRM_ERROR("invalid type: %08x\n", submit_cmd.type); + return -EINVAL; + } + + if (submit_cmd.size % 4) { + DRM_ERROR("non-aligned cmdstream buffer size: %u\n", + submit_cmd.size); + ret = -EINVAL; + goto out; + } + + submit->cmd[i].type = submit_cmd.type; + submit->cmd[i].size = submit_cmd.size / 4; + submit->cmd[i].offset = submit_cmd.submit_offset / 4; + 
submit->cmd[i].idx = submit_cmd.submit_idx; + submit->cmd[i].nr_relocs = submit_cmd.nr_relocs; + + sz = sizeof(struct drm_msm_gem_submit_reloc) * submit_cmd.nr_relocs; + submit->cmd[i].relocs = kmalloc(sz, GFP_KERNEL); + ret = copy_from_user(submit->cmd[i].relocs, userptr, sz); + if (ret) { + ret = -EFAULT; + goto out; + } + } + +out: + return ret; +} + static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i, bool backoff) { @@ -301,7 +360,7 @@ static int submit_bo(struct msm_gem_submit *submit, uint32_t idx, /* process the reloc's and patch up the cmdstream as needed: */ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj, - uint32_t offset, uint32_t nr_relocs, uint64_t relocs) + uint32_t offset, uint32_t nr_relocs, struct drm_msm_gem_submit_reloc *relocs) { uint32_t i, last_offset = 0; uint32_t *ptr; @@ -327,18 +386,11 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob } for (i = 0; i < nr_relocs; i++) { - struct drm_msm_gem_submit_reloc submit_reloc; - void __user *userptr = - u64_to_user_ptr(relocs + (i * sizeof(submit_reloc))); + struct drm_msm_gem_submit_reloc submit_reloc = relocs[i]; uint32_t off; uint64_t iova; bool valid; - if (copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc))) { - ret = -EFAULT; - goto out; - } - if (submit_reloc.submit_offset % 4) { DRM_ERROR("non-aligned reloc offset: %u\n", submit_reloc.submit_offset); @@ -694,6 +746,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; + ret = submit_lookup_cmds(submit, args, file); + if (ret) + goto out; + /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); has_ww_ticket = true; @@ -710,60 +766,29 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; for (i = 0; i < args->nr_cmds; i++) { - struct drm_msm_gem_submit_cmd submit_cmd; - void __user *userptr = - u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); struct msm_gem_object *msm_obj; uint64_t iova; - ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); - if (ret) { - ret = -EFAULT; - goto out; - } - - /* validate input from userspace: */ - switch (submit_cmd.type) { - case MSM_SUBMIT_CMD_BUF: - case MSM_SUBMIT_CMD_IB_TARGET_BUF: - case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - break; - default: - DRM_ERROR("invalid type: %08x\n", submit_cmd.type); - ret = -EINVAL; - goto out; - } - - ret = submit_bo(submit, submit_cmd.submit_idx, + ret = submit_bo(submit, submit->cmd[i].idx, &msm_obj, &iova, NULL); if (ret) goto out; - if (submit_cmd.size % 4) { - DRM_ERROR("non-aligned cmdstream buffer size: %u\n", - submit_cmd.size); + if (!submit->cmd[i].size || + ((submit->cmd[i].size + submit->cmd[i].offset) > + msm_obj->base.size / 4)) { + DRM_ERROR("invalid cmdstream size: %u\n", submit->cmd[i].size * 4); ret = -EINVAL; goto out; } - if (!submit_cmd.size || - ((submit_cmd.size + submit_cmd.submit_offset) > - msm_obj->base.size)) { - DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); - ret = -EINVAL; - goto out; - } - - submit->cmd[i].type = submit_cmd.type; - submit->cmd[i].size = submit_cmd.size / 4; - submit->cmd[i].iova = iova + submit_cmd.submit_offset; - submit->cmd[i].idx = submit_cmd.submit_idx; + submit->cmd[i].iova = iova + (submit->cmd[i].offset * 4); if (submit->valid) continue; - ret = submit_reloc(submit, msm_obj, submit_cmd.submit_offset, - submit_cmd.nr_relocs, submit_cmd.relocs); + ret = submit_reloc(submit, msm_obj, submit->cmd[i].offset * 4, 
+ submit->cmd[i].nr_relocs, submit->cmd[i].relocs); if (ret) goto out; } From patchwork Mon Oct 19 20:46:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8A6DC43457 for ; Mon, 19 Oct 2020 20:45:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6617F223FB for ; Mon, 19 Oct 2020 20:45:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="bmzm4rvd" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728648AbgJSUpt (ORCPT ); Mon, 19 Oct 2020 16:45:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728573AbgJSUpr (ORCPT ); Mon, 19 Oct 2020 16:45:47 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DCCD6C0613CE; Mon, 19 Oct 2020 13:45:47 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id e10so694508pfj.1; Mon, 19 Oct 2020 13:45:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=t6k0W0LUpGL7/GToLy9vCWhiEXYilH3isH63u8Yx9dc=; b=bmzm4rvdVc0FLaxQTSGAk/qiwOyjhcoupgB5tSrmQrQFujkxbkIK3EjvXzLGowRROd rhuvPjTy7LFIpDHISKdvy+55RbO+w1ElfMOsxlWMBsGRG29YZ4JNY+zp+OUf+PqAKs4h Q1DJC1J6Y0VvtjFmzHKCveP0KQaiDUSgEu7//u+y6meSBNQjm29aadXUo3D7KV4loxS8 Ucn13vLLgGyJa7L8NZVF7Zd/maxQclUhbLswskMRSEgBlAXIs1abXkdh9hY4v4QKZEP3 fNVvhj6gsriIAofLRLx2SOcQ+ARdEtaHHXXt5fB7nQ7uF11GiTmOIEgA6gkLmytlBmVP Friw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=t6k0W0LUpGL7/GToLy9vCWhiEXYilH3isH63u8Yx9dc=; b=OVG1tN6Xz/LAeftgT/lfTgA90rhDdnlidN0l0w/sBq/Fd134F+/pcmJ/XetYqoud+d U5ZEPZqpfR47ca3CoT50wBjKZTVApxQKXBWCh5Qddz10FbJnC+Mq/ifHiI4VB420ueG4 WfbQ6sg982N2QfaGKco7o03irAOcM2NwK9Y45asRi4E/4IuMM+si2uvM731OLSvEu0ei c9g4mrrPviTMDOkPhJbCzMhDlUsWaioQt88GoqwI1+rxCgNjv3uW7TcSL/G82WSH9AzY mW8kLOfydXHznalJev5j7LKZpuIMf/e+YsBvTH4t5Bd/QjcIpcsDa3zKyyQlQY+Y6gk9 fCIQ== X-Gm-Message-State: AOAM532NKxGurqXU75NZ0175NzjXrOY+hTxTmEO0yLE83TxbdvImJM3/ Pt+mmmL5aGcfGx2Y35tzNs4= X-Google-Smtp-Source: ABdhPJzhXNzkAEZLMxJ0jIMcOrqAtXTdCMiRioH5V6wLNqieqHhykmoBwyeBDpqxkFSN38NHpriFYg== X-Received: by 2002:a62:1806:0:b029:152:4311:a19d with SMTP id 6-20020a6218060000b02901524311a19dmr1468674pfy.20.1603140347391; Mon, 19 Oct 2020 13:45:47 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id n5sm627341pfq.46.2020.10.19.13.45.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:46 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 08/23] drm/msm: Do rpm get sooner in the submit path Date: Mon, 19 Oct 2020 13:46:09 -0700 Message-Id: <20201019204636.139997-9-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Unfortunately, due to an dev_pm_opp locking interaction with mm->mmap_sem, we need to do pm get before aquiring obj locks, otherwise we can have anger lockdep with the chain: opp_table_lock --> &mm->mmap_sem --> reservation_ww_class_mutex For an explicit fencing userspace, the impact should be minimal as we do all the fence waits before this point. It could result in some needless resumes in error cases, etc. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 002130d826aa..a9422d043bfe 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -744,11 +744,20 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, ret = submit_lookup_objects(submit, args, file); if (ret) - goto out; + goto out_pre_pm; ret = submit_lookup_cmds(submit, args, file); if (ret) - goto out; + goto out_pre_pm; + + /* + * Thanks to dev_pm_opp opp_table_lock interactions with mm->mmap_sem + * in the resume path, we need to to rpm get before we lock objs. + * Which unfortunately might involve powering up the GPU sooner than + * is necessary. But at least in the explicit fencing case, we will + * have already done all the fence waiting. 
+ */ + pm_runtime_get_sync(&gpu->pdev->dev); /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); @@ -825,6 +834,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, out: + pm_runtime_put(&gpu->pdev->dev); +out_pre_pm: submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); From patchwork Mon Oct 19 20:46:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5FE40C433E7 for ; Mon, 19 Oct 2020 20:47:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0F2F7223FD for ; Mon, 19 Oct 2020 20:47:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Qi3EEB0r" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727095AbgJSUrB (ORCPT ); Mon, 19 Oct 2020 16:47:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728707AbgJSUpu (ORCPT ); Mon, 19 Oct 2020 16:45:50 -0400 Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 997BCC0613CE; Mon, 19 Oct 2020 13:45:50 -0700 (PDT) Received: by mail-pj1-x1044.google.com with SMTP id gv6so464478pjb.4; Mon, 19 Oct 2020 13:45:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=E6HwhZIRiDDg4MAhg4l7o5fOiIxWG5h//YWiYG/NHwI=; b=Qi3EEB0rT7iBNy3/EeF7SHg3FV9UI9fZwTdvjf9WASTT+XvP56VXlknaxN61aD8CPG mW46TEHp8vZ/BZ2OpAryj4dUw4/Rfei2z+hx1G3Xe4tdnALL4MpqUhtzl0ZWCT0Pgy8B nn1pSCwDqrBp0eS1pPYQNi46f4hs73Jc4kOFDWC0uw7n8jD66lUPd87fJSh/WOARrH4P iLjAr8FG/hBFbz+qeLHXPzH8+oCgdD16eHvwyzDM6/50K4ZBqOm9ELqhkidcndz6MhTs s2RzVonjMgwadVsb6XcExH839085/+E8+uKVdjgGgDhBC9l1TxvuUHfK4yd0MLao8mvm 5UUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=E6HwhZIRiDDg4MAhg4l7o5fOiIxWG5h//YWiYG/NHwI=; b=WHAHRnAGDwqnwGWvvjTYoB1qxGGvmuwFnUEAnSRGIWfoyobHzQA5TKtqaDTfezfQfV gagEd1UkAwl8XmUeEO4zpPWC4dyW8z+pO/0HE4E5X297XwrNam5JqnEdCUAYRo6Xgbwb 0Q7S2/AvVQlX5tMuo84fQdxNNuk+Kp0OkY6wTQnHvGGJvt0UN2Cx9Mrn9lGqQ6Fn3Sk4 58hcXR/MTppJ38EPSCPdRJEXFVQmwBBKRNH2kCAhue4n4IRwbJPwt06A3scLDl9CmrYz A01lcGn5Q+1fDxPZu2O4B9e9f3aBCb7x98M7jO0brOBjJ+1b36fdkjsIzDjMyY6Ldo+Y 0Xow== X-Gm-Message-State: AOAM5338DtisfoM1+wW2SzbnFnxC4kSLWCmdWXJ39/lldWc42iL3x2E0 HBQ6Mj+dKw91tZHR4aQVRBk= X-Google-Smtp-Source: ABdhPJx8MFIA2RSrbfU9YKZCefXDzRsQoLGiHC1GD4EHQ+7wMBfcZ/2foeAPDmlYD5TXsqccpVOUhA== X-Received: by 
2002:a17:902:b68d:b029:d3:e6e4:3d99 with SMTP id c13-20020a170902b68db02900d3e6e43d99mr1672814pls.62.1603140350092; Mon, 19 Oct 2020 13:45:50 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id z16sm611183pfr.116.2020.10.19.13.45.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:48 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 09/23] drm/msm/gem: Switch over to obj->resv for locking Date: Mon, 19 Oct 2020 13:46:10 -0700 Message-Id: <20201019204636.139997-10-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark This also converts the special msm_gem_get_vaddr_active() to expect the lock to already be held. There are two call-sites for this, one already has the lock held, so it is more straightforward to just open-code the locking for the other caller. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 12 ++---------- drivers/gpu/drm/msm/msm_gem.h | 16 +++++----------- drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++++---- drivers/gpu/drm/msm/msm_gpu.c | 14 ++++++++++++-- drivers/gpu/drm/msm/msm_rd.c | 2 +- 5 files changed, 24 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 1195847714ba..6abcf9fe480d 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -644,13 +644,7 @@ void *msm_gem_get_vaddr(struct drm_gem_object *obj) */ void *msm_gem_get_vaddr_active(struct drm_gem_object *obj) { - void *ret; - - msm_gem_lock(obj); - ret = get_vaddr(obj, __MSM_MADV_PURGED); - msm_gem_unlock(obj); - - return ret; + return get_vaddr(obj, __MSM_MADV_PURGED); } void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) @@ -976,9 +970,9 @@ static void free_object(struct msm_gem_object *msm_obj) put_pages(obj); } + msm_gem_unlock(obj); drm_gem_object_release(obj); - msm_gem_unlock(obj); kfree(msm_obj); } @@ -1050,8 +1044,6 @@ static int msm_gem_new_impl(struct drm_device *dev, if (!msm_obj) return -ENOMEM; - mutex_init(&msm_obj->lock); - msm_obj->flags = flags; msm_obj->madv = MSM_MADV_WILLNEED; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 0b7dda312992..f0608d96ef03 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -85,7 +85,6 @@ struct msm_gem_object { * an IOMMU. Also used for stolen/splashscreen buffer. 
*/ struct drm_mm_node *vram_node; - struct mutex lock; /* Protects resources associated with bo */ char name[32]; /* Identifier to print for the debugfs files */ @@ -158,36 +157,31 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); static inline void msm_gem_lock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + dma_resv_lock(obj->resv, NULL); } static inline bool __must_check msm_gem_trylock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_trylock_recursive(&msm_obj->lock) == MUTEX_TRYLOCK_SUCCESS; + return dma_resv_trylock(obj->resv); } static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_lock_interruptible(&msm_obj->lock); + return dma_resv_lock_interruptible(obj->resv, NULL); } static inline void msm_gem_unlock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_unlock(&msm_obj->lock); + dma_resv_unlock(obj->resv); } static inline bool msm_gem_is_locked(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_is_locked(&msm_obj->lock); + return dma_resv_is_locked(obj->resv); } static inline bool is_active(struct msm_gem_object *msm_obj) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a9422d043bfe..50ecc8455197 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -215,7 +215,7 @@ static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, struct msm_gem_object *msm_obj = submit->bos[i].obj; if (submit->bos[i].flags & BO_PINNED) - msm_gem_unpin_iova(&msm_obj->base, submit->aspace); + msm_gem_unpin_iova_locked(&msm_obj->base, submit->aspace); if (submit->bos[i].flags & BO_LOCKED) dma_resv_unlock(msm_obj->base.resv); @@ -318,7 +318,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) uint64_t iova; /* if locking succeeded, pin bo: */ - ret = msm_gem_get_and_pin_iova(&msm_obj->base, + ret = msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (ret) @@ -377,7 +377,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob /* For now, just map the entire thing. Eventually we probably * to do it page-by-page, w/ kmap() if not vmap()d.. 
*/ - ptr = msm_gem_get_vaddr(&obj->base); + ptr = msm_gem_get_vaddr_locked(&obj->base); if (IS_ERR(ptr)) { ret = PTR_ERR(ptr); @@ -428,7 +428,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob } out: - msm_gem_put_vaddr(&obj->base); + msm_gem_put_vaddr_locked(&obj->base); return ret; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 55d16489d0f3..015f6b884e2e 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -326,7 +326,9 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state, if (!state_bo->data) goto out; + msm_gem_lock(&obj->base); ptr = msm_gem_get_vaddr_active(&obj->base); + msm_gem_unlock(&obj->base); if (IS_ERR(ptr)) { kvfree(state_bo->data); state_bo->data = NULL; @@ -470,14 +472,22 @@ static void recover_worker(struct work_struct *work) put_task_struct(task); } + /* msm_rd_dump_submit() needs bo locked to dump: */ + for (i = 0; i < submit->nr_bos; i++) + msm_gem_lock(&submit->bos[i].obj->base); + if (comm && cmd) { DRM_DEV_ERROR(dev->dev, "%s: offending task: %s (%s)\n", gpu->name, comm, cmd); msm_rd_dump_submit(priv->hangrd, submit, "offending task: %s (%s)", comm, cmd); - } else + } else { msm_rd_dump_submit(priv->hangrd, submit, NULL); + } + + for (i = 0; i < submit->nr_bos; i++) + msm_gem_unlock(&submit->bos[i].obj->base); } /* Record the crash state */ @@ -784,7 +794,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); - msm_gem_get_and_pin_iova(&msm_obj->base, submit->aspace, &iova); + msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_excl_fence(drm_obj->resv, submit->fence); diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c index fea30e7aa9e8..659e5cc4b40a 100644 --- a/drivers/gpu/drm/msm/msm_rd.c +++ b/drivers/gpu/drm/msm/msm_rd.c @@ -333,7 +333,7 @@ static void snapshot_buf(struct msm_rd_state *rd, rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size); - msm_gem_put_vaddr(&obj->base); + msm_gem_put_vaddr_locked(&obj->base); } /* called under struct_mutex */ From patchwork Mon Oct 19 20:46:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284991 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4326FC43457 for ; Mon, 19 Oct 2020 20:45:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EDE0622409 for ; Mon, 19 Oct 2020 20:45:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="ry6AqElE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728869AbgJSUp4 (ORCPT ); Mon, 19 Oct 2020 16:45:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44398 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728770AbgJSUpx (ORCPT ); Mon, 19 Oct 2020 16:45:53 -0400 Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D9C5C0613D0; Mon, 19 Oct 2020 13:45:52 -0700 (PDT) Received: by mail-pj1-x1044.google.com with SMTP id gm14so427965pjb.2; Mon, 19 Oct 2020 13:45:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EJxa0Y7nCs3Tf2gPtGPuKCLZyL3rYvU42pja/POPnYw=; b=ry6AqElEvEpY7ozJUzanUkr157xTcDW84vk1uqXe5KioXVcwLPw5jalHV1s0pWC2Af rriDWA/Zu7wUv5gHyqX9eXg6WRCK5RElbGpmxCRVi2vW57GeJQ3SMeFO2qMYj8FcxZLR o0m+6eFDETpO+n1tIu1lrgv3dtNTu5x5cfpY9GrMlEPs9+TSVIA42/UEQI43UfmpjrkV 2Hw9XRE0TnlkXbC7x6+JI4L4SaY7eFt3JPDy6pjBUA59rPSTjzxjkfn45iZheO+MtECk DV2wknIBeq28KtNVQnc7YJv1N8kDWSLw35D3mSqi7pPHLc4LVdKiwVVQkNqG8KFcc1t5 HYHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EJxa0Y7nCs3Tf2gPtGPuKCLZyL3rYvU42pja/POPnYw=; b=NGqHx2GgsKCJv9LfwaLhpWQUURMqAqZxaUY9epKrswC3sMSU+Zy/2IzmqCZ8g1ZjD/ J/aiTeZ6zJY5toQe6MVF/UBar0ikdLgndhSw/DK5IzaZQ9h4t+rkaXVCtLKAF85qfqUj R99Z6H6kq1/yqCnnImjSkB6K8rlbqD2p15HCi/ga/pGyEyGfPAWK0qFIHivPTvrtKcS0 y7mykp2sVyaXrrDLai6vGvUVcSg1hMq8zbT+pveZzDh0hZgv3SYFOACKmoPWdN4Zldwm U5RZcT8fPyg+UnR/8oO5kd1g6JlE/5GsotsJ+0je8JtA6X5edV1SNUZKAe0qphaoEnJi b4KA== X-Gm-Message-State: AOAM531SxYSz4O+w+MH5jA2vbA1VztEhusqwHYYSz2At979rqrGGhgAg SfPoO21A1tCsSmwMBZUHd0+9cl23ouuu3g== X-Google-Smtp-Source: ABdhPJwU2P1f6KBpJt/desSFYsQHS6iPfuSxa2BxK+skgo0A6H4N08swifB4YFHe+Ekvo2BsbH5YPw== X-Received: by 2002:a17:90a:d584:: with SMTP id v4mr1272342pju.194.1603140352056; Mon, 19 Oct 2020 13:45:52 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id q13sm660996pfg.3.2020.10.19.13.45.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:51 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 10/23] drm/msm: Use correct drm_gem_object_put() in fail case Date: Mon, 19 Oct 2020 13:46:11 -0700 Message-Id: <20201019204636.139997-11-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We only want to use the _unlocked() variant in the unlocked case. 
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 6abcf9fe480d..3dcb2ef4740f 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -1135,7 +1135,11 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, return obj; fail: - drm_gem_object_put(obj); + if (struct_mutex_locked) { + drm_gem_object_put_locked(obj); + } else { + drm_gem_object_put(obj); + } return ERR_PTR(ret); } From patchwork Mon Oct 19 20:46:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292360 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2FB96C43457 for ; Mon, 19 Oct 2020 20:47:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CB49222400 for ; Mon, 19 Oct 2020 20:47:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="LBZq9e80" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726779AbgJSUq5 (ORCPT ); Mon, 19 Oct 2020 16:46:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44406 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728838AbgJSUpz (ORCPT ); Mon, 19 Oct 2020 16:45:55 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 755E2C0613CE; Mon, 19 Oct 2020 13:45:54 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id t22so412231plr.9; Mon, 19 Oct 2020 13:45:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mM34E3X4vpoNxytPQSc+/o1jOvHyhw217+EG77sMv+c=; b=LBZq9e80i2kTt0UvwPxVRaQHTkxAHZL9msB0CkdkuL0IWIzAfxLCVskEC3TSNFtQvb lvuYYwdqrx+QaqBkViz2A25krnPpFxx7jbESi6FxAPo9AHhgK6jGJIaGWQffDGA58IoP bdtMy6UQDJqCJ/dDaJ5xwDq4UlaVu37cU5Pb3RdlSJirtvAptbf24+W6AbC+aN7Y432E K9tqD73KVYVETz5RCvkCZvOMb9y3Hzpj03sZeV6iyoVHtF6YoHcw0tGX8bZ1pYsT5V79 DbMs0Rfj6y3eLMDDQjkTRna8X72vhpNiRQwgyDntt3OgIjEOlo/MuSeC+KUk7454Fk9m pafw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mM34E3X4vpoNxytPQSc+/o1jOvHyhw217+EG77sMv+c=; b=V+Qg9/X8ikRjWCLkGy70thH/SfosJaiNbF0xlqt707+lC9m4YI64Zdb51rruVKHK/V 11cm7BpNHybVBiXVpzDxX8g8ClSLAwh4tb/cDWNOqAVR23EkZYYePjPcJzy+cOqhxk7Y f2KN8TeNBTp0nULYXmulOS98X5Dbux0vRy7euKGIJbT8Lqyl7TEZbWwNPxz+ImHzRAFw 4PFPzxd7JSVc9JysdDbJ4ExWrcbIkYCAxOUO6iro8efZYUrDV9EQUEvwQA21S8rV/veR osJZxSLdTfZrwENeW4jOxXvZnGxMLZZV96IvfzHAhkr7LfwKt2Lfr9gfrBlVooU1OyLq GrXw== X-Gm-Message-State: 
AOAM533CsblRqaI8fAK6naRkTApFqorvhuRrTGSqATJ7PZqkuxDKAMVd YWzgEn4Pu0P0uS/NU0JT28I= X-Google-Smtp-Source: ABdhPJxESgo/rkutIMHVbmiA9JJvXnqgXfWq+hL9O5Y9YxHLj0S+dAiI2e/Sxy8WF/AyK9oo/R81hA== X-Received: by 2002:a17:90a:3e4e:: with SMTP id t14mr1175504pjm.217.1603140354046; Mon, 19 Oct 2020 13:45:54 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id y4sm399856pgl.67.2020.10.19.13.45.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:53 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 11/23] drm/msm: Drop chatty trace Date: Mon, 19 Oct 2020 13:46:12 -0700 Message-Id: <20201019204636.139997-12-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It is somewhat redundant with the gpu tracepoints, and anyways not too useful to justify spamming the log when debug traces are enabled. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 015f6b884e2e..ed6645aa0ae5 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -545,7 +545,6 @@ static void recover_worker(struct work_struct *work) static void hangcheck_timer_reset(struct msm_gpu *gpu) { - DBG("%s", gpu->name); mod_timer(&gpu->hangcheck_timer, round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); } From patchwork Mon Oct 19 20:46:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97406C43457 for ; Mon, 19 Oct 2020 20:46:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 40C58223FB for ; Mon, 19 Oct 2020 20:46:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="gCoamua6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729026AbgJSUp6 (ORCPT ); Mon, 19 Oct 2020 16:45:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727826AbgJSUp4 (ORCPT ); Mon, 19 Oct 2020 16:45:56 -0400 Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85745C0613D0; Mon, 19 Oct 2020 13:45:56 -0700 (PDT) Received: by mail-pl1-x642.google.com with SMTP id 1so433192ple.2; Mon, 19 Oct 2020 13:45:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=fvZYFhP3VqPsfoYbfdMUsTMvP/ufY5s7G5S6HW9OQ+A=; b=gCoamua6+Y/FNX0HBUgPdlBCFfSl5fHsyZQW6PFZqdfRp4eixW5Dc52Gc4Ymp7vaUB xV0ShIROogslBIMuat2v0hbWitoh1v/jzylJaPDTq7DYHMJqDWysd1g8P57dT6hIeVgt zioo870zgFiryNm6ZeDk9F3XHcyO4Mt3g/+pdDMXzwLrFOoRZuLQhODf+8I3q7p8jDma YSZGezYABEssxfV1ose1kt0H10ZzmYzAkugtsw/8n9NWOJQqIJlMBJhvNy4yXoMcShrS /4YqN8LlSrV4+NsIS5fr7QgLF9D6xzM781HPdZdC/uc23pjOTSP6Il2PNAFagojGuFFi skLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=fvZYFhP3VqPsfoYbfdMUsTMvP/ufY5s7G5S6HW9OQ+A=; b=N9H/iXhCpkHDzQTpNB96Z3cRfQjOn8IMHrAcO8TB3NsxuNvJP6mk4shWkG1e1u7PiG NZHdA8UGWdkUlEP/M6f5nKUJ+fLhOfytEM4aEGYnFUx3zOl7VEoxBRr51V3S9F12xsCy W0fjgH4oOFs9V64eFd2Gc1SZC3gRpomLGf7qAWpsXZPCqkqdWWRtOEp4pFeC7qlUKzfR vcSx1IO14xZkHwsWUK4J/7/MI6KU+gALQg4y/v9b+U/51PQloavNNXHRxSfkGfW58PlO C/JCVLzzeHLmvdndFPtjSMzgQ+B31xkG8ZhMCknlixQtyWsbN9GjSHtAVkU6tV9Ck9Si BcMA== X-Gm-Message-State: AOAM530Wkghx0CIiwbdLteIKUxgU5rOWwnavmQeyhYMxio2GsvC67ERc 6aSDRytVLm6uPsPF7cmIi5qvk85y3QIvZA== X-Google-Smtp-Source: ABdhPJx4uP7FkLFS7jSBBS2QDkYGnePLAdS4husaJzLRznxFtjXZrwD3iT4uiSG03gb95j5KJcVuug== X-Received: by 2002:a17:902:8d8a:b029:d5:ef48:90de with SMTP id v10-20020a1709028d8ab02900d5ef4890demr971523plo.12.1603140356077; Mon, 19 Oct 2020 13:45:56 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id r6sm661328pfg.85.2020.10.19.13.45.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:55 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 12/23] drm/msm: Move update_fences() Date: Mon, 19 Oct 2020 13:46:13 -0700 Message-Id: <20201019204636.139997-13-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Small cleanup, update_fences() is used in the hangcheck path, but also in the normal retire path. 
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index ed6645aa0ae5..1667d8066897 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -265,6 +265,20 @@ int msm_gpu_hw_init(struct msm_gpu *gpu) return ret; } +static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, + uint32_t fence) +{ + struct msm_gem_submit *submit; + + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno > fence) + break; + + msm_update_fence(submit->ring->fctx, + submit->fence->seqno); + } +} + #ifdef CONFIG_DEV_COREDUMP static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset, size_t count, void *data, size_t datalen) @@ -413,20 +427,6 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, * Hangcheck detection for locked gpu: */ -static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, - uint32_t fence) -{ - struct msm_gem_submit *submit; - - list_for_each_entry(submit, &ring->submits, node) { - if (submit->seqno > fence) - break; - - msm_update_fence(submit->ring->fctx, - submit->fence->seqno); - } -} - static struct msm_gem_submit * find_submit(struct msm_ringbuffer *ring, uint32_t fence) { From patchwork Mon Oct 19 20:46:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5379BC433E7 for ; Mon, 19 Oct 2020 20:46:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E4D0C223FB for ; Mon, 19 Oct 2020 20:46:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Tjjx5ZaT" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726152AbgJSUqu (ORCPT ); Mon, 19 Oct 2020 16:46:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44422 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729233AbgJSUp7 (ORCPT ); Mon, 19 Oct 2020 16:45:59 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 20373C0613CE; Mon, 19 Oct 2020 13:45:59 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id y1so420736plp.6; Mon, 19 Oct 2020 13:45:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ogL3L1B6WLttRDyzqm7bV7L40gSAtYDvVkLUv7bCtLk=; b=Tjjx5ZaTnnHcPaL205Ne9ekuoxSlQfz9bfxobD4lnx3/SAZshZHoHNw+eDAMvawBcx sY/xrpzeHyk5rb/oxn1o7dw6NgPfz3XOPEyPcvZGpr4ezxgcBu0TQp4fu5mJxysWnA/g 
99RlUq054UDyH8W44Td8qY2+BTE8eXnL+eNNvr5aZwnVc1cjbOaNbsu9KlzrOZfdrn9T FEkcQUhPUHDHU39nrVKGRV/isuST97VKGzTnBikFq9vIY8e+Vrct2LHrlcsXNgtZZZr2 gfFWd8z0PyFTWKl21eL7DtXhglSNDAsUKdf5Hg5IlzPIRegveKy/FlogndOzQyFKDLAR vjVQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ogL3L1B6WLttRDyzqm7bV7L40gSAtYDvVkLUv7bCtLk=; b=iM6/yvAQBh45UhbSdX90G4xoBWvLJrd53px7Q6MYvf/MqtYikgnWqFoRTtDCB45Sp5 wSHtdkTleT37gIUxJebRyV/8yxNGX6m3t5w/g1ija44n4ZNxbMuVDWWpMZWkEaPgNE0l AiKk6WUrdXFZWFVjfGQXYh1r+GeVVgFT7iHcgddnJg6PTJrIyVyCiKvknSxlpGFXD/DQ qnPt3NPZSpXTrDEVwcE7D7Zu4Fza73/BtkJe2sduLU+20H58UEFcsjM1WOdYcMXtoYxe hbwjjSbytKftZVQUQY3xt3DQ5M9PH9qjpiUgI4pi7PD84txmUkr8N00dEWrknIDdaRu+ jGTg== X-Gm-Message-State: AOAM533qwTx2+kIsHzMLkt9vCI8FGuQMqinXfDnrHqrcNETF8p3x/f1/ v6dOP40hARCqnkp7ktCJ874= X-Google-Smtp-Source: ABdhPJzG0upDSr8hJ7y8DP2B2HubyUea7HwnultSaTU4b4YovLjSDU/NeeKjCNZMmXBif8wm9GX/0A== X-Received: by 2002:a17:90b:608:: with SMTP id gb8mr1238936pjb.6.1603140358622; Mon, 19 Oct 2020 13:45:58 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id j138sm635258pfd.19.2020.10.19.13.45.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:45:57 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 13/23] drm/msm: Add priv->mm_lock to protect active/inactive lists Date: Mon, 19 Oct 2020 13:46:14 -0700 Message-Id: <20201019204636.139997-14-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Rather than relying on the big dev->struct_mutex hammer, introduce a more specific lock for protecting the bo lists. 
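A rough sketch of how the new lock is set up and used, condensed from the hunks below (not the complete diff):

	/* in msm_drm_init(): */
	mutex_init(&priv->mm_lock);

	/* Teach lockdep about lock ordering wrt. shrinker: */
	fs_reclaim_acquire(GFP_KERNEL);
	might_lock(&priv->mm_lock);
	fs_reclaim_release(GFP_KERNEL);

	/* moving a bo between the inactive and active lists, e.g. in
	 * msm_gem_active_get():
	 */
	mutex_lock(&priv->mm_lock);
	msm_obj->gpu = gpu;
	list_del_init(&msm_obj->mm_list);
	list_add_tail(&msm_obj->mm_list, &gpu->active_list);
	mutex_unlock(&priv->mm_lock);

The shrinker paths take the same mm_lock around their list_for_each_entry() walks of priv->inactive_list, rather than depending on struct_mutex.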
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_debugfs.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.h | 13 +++++++++++- drivers/gpu/drm/msm/msm_gem.c | 28 +++++++++++++++----------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 12 +++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 5 ++++- 6 files changed, 58 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c index ee2e270f464c..64afbed89821 100644 --- a/drivers/gpu/drm/msm/msm_debugfs.c +++ b/drivers/gpu/drm/msm/msm_debugfs.c @@ -112,6 +112,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) { struct msm_drm_private *priv = dev->dev_private; struct msm_gpu *gpu = priv->gpu; + int ret; + + ret = mutex_lock_interruptible(&priv->mm_lock); + if (ret) + return ret; if (gpu) { seq_printf(m, "Active Objects (%s):\n", gpu->name); @@ -121,6 +126,8 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) seq_printf(m, "Inactive Objects:\n"); msm_gem_describe_objects(&priv->inactive_list, m); + mutex_unlock(&priv->mm_lock); + return 0; } diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 70bc4bb69edc..15c41786d018 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -7,6 +7,7 @@ #include #include +#include #include #include @@ -468,6 +469,12 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) init_llist_head(&priv->free_list); INIT_LIST_HEAD(&priv->inactive_list); + mutex_init(&priv->mm_lock); + + /* Teach lockdep about lock ordering wrt. shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&priv->mm_lock); + fs_reclaim_release(GFP_KERNEL); drm_mode_config_init(ddev); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 713a0ae28125..7431d68ea102 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -174,8 +174,19 @@ struct msm_drm_private { struct msm_rd_state *hangrd; /* debugfs to dump hanging submits */ struct msm_perf_state *perf; - /* list of GEM objects: */ + /* + * List of inactive GEM objects. Every bo is either in the inactive_list + * or gpu->active_list (for the gpu it is active on[1]) + * + * These lists are protected by mm_lock. If struct_mutex is involved, it + * should be aquired prior to mm_lock. One should *not* hold mm_lock in + * get_pages()/vmap()/etc paths, as they can trigger the shrinker. 
+ * + * [1] if someone ever added support for the old 2d cores, there could be + * more than one gpu object + */ struct list_head inactive_list; + struct mutex mm_lock; /* worker for delayed free of objects: */ struct work_struct free_work; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 3dcb2ef4740f..092ed152999e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -768,13 +768,17 @@ int msm_gem_sync_object(struct drm_gem_object *obj, void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + struct msm_drm_private *priv = obj->dev->dev_private; + + might_sleep(); WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); if (!atomic_fetch_inc(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); + mutex_unlock(&priv->mm_lock); } } @@ -783,12 +787,14 @@ void msm_gem_active_put(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_drm_private *priv = obj->dev->dev_private; - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + might_sleep(); if (!atomic_dec_return(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); } } @@ -943,13 +949,16 @@ static void free_object(struct msm_gem_object *msm_obj) { struct drm_gem_object *obj = &msm_obj->base; struct drm_device *dev = obj->dev; + struct msm_drm_private *priv = dev->dev_private; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); + mutex_lock(&priv->mm_lock); list_del(&msm_obj->mm_list); + mutex_unlock(&priv->mm_lock); msm_gem_lock(obj); @@ -1123,14 +1132,9 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); } - if (struct_mutex_locked) { - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - } else { - mutex_lock(&dev->struct_mutex); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - mutex_unlock(&dev->struct_mutex); - } + mutex_lock(&priv->mm_lock); + list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); return obj; @@ -1198,9 +1202,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, msm_gem_unlock(obj); - mutex_lock(&dev->struct_mutex); + mutex_lock(&priv->mm_lock); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - mutex_unlock(&dev->struct_mutex); + mutex_unlock(&priv->mm_lock); return obj; diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 2dc0ffa925b4..6be073b8ca08 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -51,6 +51,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return 0; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -59,6 +61,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -78,6 +82,8 @@ 
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return SHRINK_STOP; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; @@ -90,6 +96,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -112,6 +120,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) if (!msm_gem_shrinker_lock(dev, &unlock)) return NOTIFY_DONE; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -129,6 +139,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) break; } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 6c9e1fdc1a76..1806e87600c0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -94,7 +94,10 @@ struct msm_gpu { struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS]; int nr_rings; - /* list of GEM active objects: */ + /* + * List of GEM active objects on this gpu. Protected by + * msm_drm_private::mm_lock + */ struct list_head active_list; /* does gpu need hw_init? */ From patchwork Mon Oct 19 20:46:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292362 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1ACDDC433E7 for ; Mon, 19 Oct 2020 20:46:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C98C1223FB for ; Mon, 19 Oct 2020 20:46:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="rSMb/E5z" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725876AbgJSUqG (ORCPT ); Mon, 19 Oct 2020 16:46:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729526AbgJSUqE (ORCPT ); Mon, 19 Oct 2020 16:46:04 -0400 Received: from mail-pg1-x544.google.com (mail-pg1-x544.google.com [IPv6:2607:f8b0:4864:20::544]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4EF95C0613CE; Mon, 19 Oct 2020 13:46:04 -0700 (PDT) Received: by mail-pg1-x544.google.com with SMTP id n9so682050pgt.8; Mon, 19 Oct 2020 13:46:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+QvSzUsv0gkDi3ZLkzoLS6KaK9/0zRVEGNQPcdDAdis=; b=rSMb/E5zzRZKIMIscEuQX2bt4jU0S3rMW5406oMfwjp7HVjvPZLHMzlAc3bbaUviSg 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 14/23] drm/msm: Document and rename preempt_lock
Date: Mon, 19 Oct 2020 13:46:15 -0700
Message-Id: <20201019204636.139997-15-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

Before adding another lock, give ring->lock a more descriptive name.
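As a rough illustration of what the renamed lock covers (a minimal sketch with made-up names, not the driver's actual code): write-pointer updates are serialized against preemption, and because the same lock can be taken from the preempt irq path, users take the irq-safe form:

/*
 * Sketch only: shows the preempt_lock usage pattern described above.
 * 'example_ring' and 'example_update_wptr' are hypothetical names.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_ring {
        spinlock_t preempt_lock;        /* can be taken from irq context */
        u32 wptr;
};

static void example_update_wptr(struct example_ring *ring, u32 new_wptr)
{
        unsigned long flags;

        spin_lock_irqsave(&ring->preempt_lock, flags);
        ring->wptr = new_wptr;
        spin_unlock_irqrestore(&ring->preempt_lock, flags);
}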
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 ++++++------ drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.h | 7 ++++++- 5 files changed, 17 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index b2593c6bd2ac..2befaf304f04 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -36,7 +36,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -44,7 +44,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 7e04509c4e1f..183de1139eeb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -45,9 +45,9 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) if (!ring) return; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr); } @@ -62,9 +62,9 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu) bool empty; struct msm_ringbuffer *ring = gpu->rb[i]; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); empty = (get_wptr(ring) == ring->memptrs->rptr); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); if (!empty) return ring; @@ -132,9 +132,9 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu) } /* Make sure the wptr doesn't update while we're in motion */ - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Set the address of the incoming preemption record */ gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 0894703a742e..5dddb9163bd3 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -65,7 +65,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -73,7 +73,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff 
--git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 935bf9b1d941..1b6958e908dc 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,7 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); - spin_lock_init(&ring->lock); + spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 0987d6bf848c..4956d1bc5d0e 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -46,7 +46,12 @@ struct msm_ringbuffer { struct msm_rbmemptrs *memptrs; uint64_t memptrs_iova; struct msm_fence_context *fctx; - spinlock_t lock; + + /* + * preempt_lock protects preemption and serializes wptr updates against + * preemption. Can be acquired from irq context. + */ + spinlock_t preempt_lock; }; struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, From patchwork Mon Oct 19 20:46:16 2020 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292366
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 15/23] drm/msm: Protect ring->submits with its own lock
Date: Mon, 19 Oct 2020 13:46:16 -0700
Message-Id: <20201019204636.139997-16-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

One less place to rely on dev->struct_mutex.
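Concretely, the change introduces a per-ring spinlock that every reader and writer of the submit list takes around list manipulation; a minimal sketch of that pattern (hypothetical names, not the exact driver code):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_submit {
        struct list_head node;
        u32 seqno;
};

struct example_ring {
        struct list_head submits;       /* in-flight submits, protected by submit_lock */
        spinlock_t submit_lock;
};

static void example_add_submit(struct example_ring *ring,
                               struct example_submit *submit)
{
        /* every producer and consumer of the list takes the same lock */
        spin_lock(&ring->submit_lock);
        list_add_tail(&submit->node, &ring->submits);
        spin_unlock(&ring->submit_lock);
}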
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gem_submit.c | 2 ++ drivers/gpu/drm/msm/msm_gpu.c | 37 ++++++++++++++++++++++------ drivers/gpu/drm/msm/msm_ringbuffer.c | 1 + drivers/gpu/drm/msm/msm_ringbuffer.h | 6 +++++ 4 files changed, 39 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 50ecc8455197..c078b58d9c10 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -65,7 +65,9 @@ void msm_gem_submit_free(struct msm_gem_submit *submit) unsigned i; dma_fence_put(submit->fence); + spin_lock(&submit->ring->submit_lock); list_del(&submit->node); + spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 1667d8066897..1d6f3dc3fe78 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -270,6 +270,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, { struct msm_gem_submit *submit; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) { if (submit->seqno > fence) break; @@ -277,6 +278,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_update_fence(submit->ring->fctx, submit->fence->seqno); } + spin_unlock(&ring->submit_lock); } #ifdef CONFIG_DEV_COREDUMP @@ -432,11 +434,14 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence) { struct msm_gem_submit *submit; - WARN_ON(!mutex_is_locked(&ring->gpu->dev->struct_mutex)); - - list_for_each_entry(submit, &ring->submits, node) - if (submit->seqno == fence) + spin_lock(&ring->submit_lock); + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno == fence) { + spin_unlock(&ring->submit_lock); return submit; + } + } + spin_unlock(&ring->submit_lock); return NULL; } @@ -533,8 +538,10 @@ static void recover_worker(struct work_struct *work) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) gpu->funcs->submit(gpu, submit); + spin_unlock(&ring->submit_lock); } } @@ -721,7 +728,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { struct drm_device *dev = gpu->dev; - struct msm_gem_submit *submit, *tmp; int i; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); @@ -730,9 +736,24 @@ static void retire_submits(struct msm_gpu *gpu) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; - list_for_each_entry_safe(submit, tmp, &ring->submits, node) { - if (dma_fence_is_signaled(submit->fence)) + while (true) { + struct msm_gem_submit *submit = NULL; + + spin_lock(&ring->submit_lock); + submit = list_first_entry_or_null(&ring->submits, + struct msm_gem_submit, node); + spin_unlock(&ring->submit_lock); + + /* + * If no submit, we are done. If submit->fence hasn't + * been signalled, then later submits are not signalled + * either, so we are also done. 
+ */ + if (submit && dma_fence_is_signaled(submit->fence)) { retire_submit(gpu, ring, submit); + } else { + break; + } } } } @@ -775,7 +796,9 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) submit->seqno = ++ring->seqno; + spin_lock(&ring->submit_lock); list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); msm_rd_dump_submit(priv->rd, submit, NULL); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 1b6958e908dc..4d2a2a4abef8 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,6 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); + spin_lock_init(&ring->submit_lock); spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 4956d1bc5d0e..fe55d4a1aa16 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -39,7 +39,13 @@ struct msm_ringbuffer { int id; struct drm_gem_object *bo; uint32_t *start, *end, *cur, *next; + + /* + * List of in-flight submits on this ring. Protected by submit_lock. + */ struct list_head submits; + spinlock_t submit_lock; + uint64_t iova; uint32_t seqno; uint32_t hangcheck_fence; From patchwork Mon Oct 19 20:46:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284986 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53DB0C433DF for ; Mon, 19 Oct 2020 20:46:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1E167223FD for ; Mon, 19 Oct 2020 20:46:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="hPmHco/U" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728020AbgJSUqo (ORCPT ); Mon, 19 Oct 2020 16:46:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44454 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729722AbgJSUqJ (ORCPT ); Mon, 19 Oct 2020 16:46:09 -0400 Received: from mail-pg1-x544.google.com (mail-pg1-x544.google.com [IPv6:2607:f8b0:4864:20::544]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CAF7C0613CE; Mon, 19 Oct 2020 13:46:09 -0700 (PDT) Received: by mail-pg1-x544.google.com with SMTP id g29so700221pgl.2; Mon, 19 Oct 2020 13:46:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cvN5SUIFveAI0zNeWeKf81AuIGKhvHUdYM1O7AZtY9w=; b=hPmHco/U9VNlWr9x9hBn/2DTJqe0MiSOpnxHUr3h5foy0bStDdyI3asawh8+Z6TSE6 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 16/23] drm/msm: Refcount submits
Date: Mon, 19 Oct 2020 13:46:17 -0700
Message-Id: <20201019204636.139997-17-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

Before we remove dev->struct_mutex from the retire path, we have to deal with the situation of a submit retiring before the submit ioctl returns. To deal with this, ring->submits will hold a reference to the submit, which is dropped when the submit is retired. And the submit ioctl path holds its own ref, which it drops when it is done with the submit.

Also, add to the submit list *after* getting/pinning bo's, to prevent badness in case the completed fence is corrupted and retire_worker mistakenly believes the submit is done too early.
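The ownership rule described above (the ring's submit list and the ioctl path each hold their own reference and drop it independently) is the standard kref pattern; a generic sketch with hypothetical names, not the actual driver code:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct example_submit {
        struct kref ref;
        /* ... rest of the submit state ... */
};

static void example_submit_destroy(struct kref *kref)
{
        struct example_submit *submit =
                container_of(kref, struct example_submit, ref);

        kfree(submit);
}

static inline void example_submit_get(struct example_submit *submit)
{
        kref_get(&submit->ref);
}

static inline void example_submit_put(struct example_submit *submit)
{
        /* destroy runs only when the last holder (list or ioctl) drops its ref */
        kref_put(&submit->ref, example_submit_destroy);
}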
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_drv.h | 1 - drivers/gpu/drm/msm/msm_gem.h | 13 +++++++++++++ drivers/gpu/drm/msm/msm_gem_submit.c | 11 +++++------ drivers/gpu/drm/msm/msm_gpu.c | 21 ++++++++++++++++----- 4 files changed, 34 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 7431d68ea102..7e6fb4af4964 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -277,7 +277,6 @@ void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); bool msm_use_mmu(struct drm_device *dev); -void msm_gem_submit_free(struct msm_gem_submit *submit); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f0608d96ef03..2f289c436ddd 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -213,6 +213,7 @@ void msm_gem_free_work(struct work_struct *work); * lasts for the duration of the submit-ioctl. */ struct msm_gem_submit { + struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; struct msm_gem_address_space *aspace; @@ -249,6 +250,18 @@ struct msm_gem_submit { } bos[]; }; +void __msm_gem_submit_destroy(struct kref *kref); + +static inline void msm_gem_submit_get(struct msm_gem_submit *submit) +{ + kref_get(&submit->ref); +} + +static inline void msm_gem_submit_put(struct msm_gem_submit *submit) +{ + kref_put(&submit->ref, __msm_gem_submit_destroy); +} + /* helper to determine of a buffer in submit should be dumped, used for both * devcoredump and debugfs cmdstream dumping: */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index c078b58d9c10..d784e97f233f 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -42,6 +42,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, if (!submit) return NULL; + kref_init(&submit->ref); submit->dev = dev; submit->aspace = queue->ctx->aspace; submit->gpu = gpu; @@ -60,14 +61,13 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, return submit; } -void msm_gem_submit_free(struct msm_gem_submit *submit) +void __msm_gem_submit_destroy(struct kref *kref) { + struct msm_gem_submit *submit = + container_of(kref, struct msm_gem_submit, ref); unsigned i; dma_fence_put(submit->fence); - spin_lock(&submit->ring->submit_lock); - list_del(&submit->node); - spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); @@ -841,8 +841,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); - if (ret) - msm_gem_submit_free(submit); + msm_gem_submit_put(submit); out_unlock: if (ret && (out_fence_fd >= 0)) put_unused_fd(out_fence_fd); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 1d6f3dc3fe78..bcd9b4fa98b2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -722,7 +722,12 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, pm_runtime_mark_last_busy(&gpu->pdev->dev); pm_runtime_put_autosuspend(&gpu->pdev->dev); - msm_gem_submit_free(submit); + + spin_lock(&ring->submit_lock); + list_del(&submit->node); + spin_unlock(&ring->submit_lock); + + msm_gem_submit_put(submit); } static void retire_submits(struct msm_gpu *gpu) @@ -796,10 +801,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, 
struct msm_gem_submit *submit) submit->seqno = ++ring->seqno; - spin_lock(&ring->submit_lock); - list_add_tail(&submit->node, &ring->submits); - spin_unlock(&ring->submit_lock); - msm_rd_dump_submit(priv->rd, submit, NULL); update_sw_cntrs(gpu); @@ -826,6 +827,16 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) msm_gem_active_get(drm_obj, gpu); } + /* + * ring->submits holds a ref to the submit, to deal with the case + * that a submit completes before msm_ioctl_gem_submit() returns. + */ + msm_gem_submit_get(submit); + + spin_lock(&ring->submit_lock); + list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); + gpu->funcs->submit(gpu, submit); priv->lastctx = submit->queue->ctx; From patchwork Mon Oct 19 20:46:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292365 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7805C388EC for ; Mon, 19 Oct 2020 20:46:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 539F522403 for ; Mon, 19 Oct 2020 20:46:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="OJA+wyn6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729953AbgJSUqL (ORCPT ); Mon, 19 Oct 2020 16:46:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729583AbgJSUqL (ORCPT ); Mon, 19 Oct 2020 16:46:11 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 53971C0613CE; Mon, 19 Oct 2020 13:46:11 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id v12so405236ply.12; Mon, 19 Oct 2020 13:46:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dS6hhGeeNo+o5stc1m3klvV0MU6/9n7XgjyibfTeKXY=; b=OJA+wyn6JLDMhTDG8WBs5XhLEe+X/qKDVSrazR34UOgIx3ndbLBu0K4nMx9XTq+BvL stx+uZloX5mxvTaRfAG3QyomeD2JM5nfXfqLP+i66IwToKfmXgQU0ke3NcyxY2Nfwt8P Q2nom7x7JnBoZa+2BBHPbFHEt34vdYCpkNC3Pq+NID7HLLtY+D4V7ZdPkTNLFbSHliPO mcflihkizr6cjRY48OVU97eerG9StIrxFZuN3jDX+4Iv4PidM/FBT5m6z9omtu6xRvA5 oqnP7ZQ1pzIgjM2mmlJR5fvtc23X6dksKe0PZO/ne//8W/nUBzp/Pnwf2rT6vnNXXVG0 cECA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dS6hhGeeNo+o5stc1m3klvV0MU6/9n7XgjyibfTeKXY=; b=dcQC3W2mhbrD5YbNPy1o1PK+v+JCMbkbajh7rzcWmbNYfh8dCp7phkXxzEmq+XVMtK /nZwWwUjPW0YMpTMSGW9q34xbEfFFcjEh8Vvu0n01wC+X0DCV5qw5/lfG3g0s3EcSgOO gmzX7F5f7mf39s8Vbp/r/KG5M36wIEtYEbyHE/bGZ9N7IwzKAmy+LEnTWBvTQX6ULaAn 
3tXh3s9UZWKqo+7zJS6SqIrFHOEuGJFSFcLNGZGNixotVzO2cYQ9ShuxqbTWFuJcVRCz t+70K+Pr4yOj7z2yW2Q+zvTjA6Azw2iy6tuFopxGPAcHU4m8zFfBJi4krhpOHa5dqOwn 4d5g== X-Gm-Message-State: AOAM532rkIQzgyEhwMQUtrL16DXjlIz3qsvAPF2fp5gEATDTi8QwZXM/ Pw0Omc3/KK57kfZQ2oToMAw= X-Google-Smtp-Source: ABdhPJz2M/XIYWLX70uK3IG3ufRB/qD6L5QWHxAYEGC47AuIcoKtxDnfwWscnnNh8cWb5PKS+LXQiA== X-Received: by 2002:a17:90b:ecb:: with SMTP id gz11mr1215247pjb.25.1603140370892; Mon, 19 Oct 2020 13:46:10 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id n139sm600427pfd.167.2020.10.19.13.46.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:46:09 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 17/23] drm/msm: Remove obj->gpu Date: Mon, 19 Oct 2020 13:46:18 -0700 Message-Id: <20201019204636.139997-18-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It cannot be atomically updated with obj->active_count, and the only purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once retire_submits() is not serialized with incoming submits via struct_mutex) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 2 -- drivers/gpu/drm/msm/msm_gem.h | 1 - drivers/gpu/drm/msm/msm_gpu.c | 5 ----- 3 files changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 092ed152999e..e4876498be47 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -775,7 +775,6 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) if (!atomic_fetch_inc(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); mutex_unlock(&priv->mm_lock); @@ -791,7 +790,6 @@ void msm_gem_active_put(struct drm_gem_object *obj) if (!atomic_dec_return(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); mutex_unlock(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 2f289c436ddd..f4e73c6f07bf 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -64,7 +64,6 @@ struct msm_gem_object { * */ struct list_head mm_list; - struct msm_gpu *gpu; /* non-null if active */ /* Transiently in the process of submit ioctl, objects associated * with the submit are on submit->bo_list.. this only lasts for diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index bcd9b4fa98b2..d0f625112a97 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -810,11 +810,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) struct drm_gem_object *drm_obj = &msm_obj->base; uint64_t iova; - /* can't happen yet.. 
but when we add 2d support we'll have - * to deal w/ cross-ring synchronization: - */ - WARN_ON(is_active(msm_obj) && (msm_obj->gpu != gpu)); - /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); From patchwork Mon Oct 19 20:46:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284990 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B80E0C43467 for ; Mon, 19 Oct 2020 20:46:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 66CBD22404 for ; Mon, 19 Oct 2020 20:46:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="OidWH+e5" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730101AbgJSUqP (ORCPT ); Mon, 19 Oct 2020 16:46:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44468 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730036AbgJSUqN (ORCPT ); Mon, 19 Oct 2020 16:46:13 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 84A02C0613CE; Mon, 19 Oct 2020 13:46:13 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id w21so675154pfc.7; Mon, 19 Oct 2020 13:46:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=DjvKpJElljP1CGKDtyawiR3HJwFiK2kIC8fsb2FEuqQ=; b=OidWH+e5WlxD+T+8SIqVgzIV0i57nSDWJzr8VsRClLS9yF3wX3zBj1yfnQp7tmCcPb 1zse+ri+TKCix/FWRIeuSFDzhusItfK4B0cc8xH/bet7cUNNzMGcHWvIbBC2I9bC2QB9 uT/3O55E+w4b/BEVAU3YaoE1kN0spOpIELIuhKL/EUYGACTtShBjQLu+kvKNEVGWA9Ug hxuUOXrlidD47Ixp2+g8lZ266tUI+Nly9/OSeA4xM4y3KpW4a7ms5Z9jHOaRlLyzKn5d yPbyZws1wS0dcj+imFZSta+mXIn/uwqraMBdKbaB2mxguT04kh0AHY/FzqYD2lWSvs3y Mejw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=DjvKpJElljP1CGKDtyawiR3HJwFiK2kIC8fsb2FEuqQ=; b=GChlJRIE8x5EYZG1AHQ//53LNpfhNnArSnVPQmd4643FMPKQ8GlpiZxLi8BPBqXlXi BUjMCzuQGgLQvAp9pzMWXOjDEsztErLL6f0ubClIJ9sLJEMngwBAQQVPYzXvvrNpvtfS KSjjZAxtWhH0cxuF/2HEK7r5UGpv7VZFq9pkw1tBGSYvvHQpH9dk80KgjNkB2SwFjkk9 H1iHuqyeWgBePPvwUUPz3gcWcG2AVj0TW2OVnWGu/BkkJUfnaL/PwFc6a4CLb9+MlTSC U19H9dxFgfb1vXn6Nyaa4j3nG6gkXnY4e97yIc7hInAcw55uET/Y6z7h2fblDObrG1Cr 2Mpg== X-Gm-Message-State: AOAM531lhbsz7Z3EGyIeEPx477ZQCnIkV6Wl9NslOxHimxEqopvgn++0 tR35dI8kL5l6Xvqz6ZI01IE= X-Google-Smtp-Source: ABdhPJy4A2xGRFyNaIiOmsSayiLrZ8+3WMkfP8VzoC7DsZ27VqqF4SvAljhJDjshN/gS2YxPj4quHg== X-Received: by 2002:a63:370f:: with SMTP id e15mr1360758pga.124.1603140373089; 
Mon, 19 Oct 2020 13:46:13 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id y10sm612527pff.119.2020.10.19.13.46.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Oct 2020 13:46:12 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v3 18/23] drm/msm: Drop struct_mutex from the retire path Date: Mon, 19 Oct 2020 13:46:19 -0700 Message-Id: <20201019204636.139997-19-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com> References: <20201019204636.139997-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we are not relying on dev->struct_mutex to protect the ring->submits lists, drop the struct_mutex lock. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d0f625112a97..30ba3beaad0a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -717,7 +717,7 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_gem_active_put(&msm_obj->base); msm_gem_unpin_iova(&msm_obj->base, submit->aspace); - drm_gem_object_put_locked(&msm_obj->base); + drm_gem_object_put(&msm_obj->base); } pm_runtime_mark_last_busy(&gpu->pdev->dev); @@ -732,11 +732,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { - struct drm_device *dev = gpu->dev; int i; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* Retire the commits starting with highest priority */ for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; @@ -766,15 +763,12 @@ static void retire_submits(struct msm_gpu *gpu) static void retire_worker(struct work_struct *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, retire_work); - struct drm_device *dev = gpu->dev; int i; for (i = 0; i < gpu->nr_rings; i++) update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence); - mutex_lock(&dev->struct_mutex); retire_submits(gpu); - mutex_unlock(&dev->struct_mutex); } /* call from irq handler to schedule work to retire bo's */ From patchwork Mon Oct 19 20:46:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 292363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A868C433DF for ; Mon, 19 Oct 2020 20:46:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 19/23] drm/msm: Drop struct_mutex in free_object() path
Date: Mon, 19 Oct 2020 13:46:20 -0700
Message-Id: <20201019204636.139997-20-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

Now that active_list/inactive_list is protected by mm_lock, we no longer need dev->struct_mutex in the free_object() path.
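In other words, once the only cross-object state the free path touches (the mm lists) has its own lock, teardown no longer needs the device-wide mutex; a simplified sketch of the resulting shape (hypothetical names, not the exact driver code):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct example_device {
        struct mutex mm_lock;           /* protects the mm lists */
};

struct example_object {
        struct example_device *dev;
        struct list_head mm_list;       /* membership on one of dev's mm lists */
};

static void example_free_object(struct example_object *obj)
{
        struct example_device *dev = obj->dev;

        /* list membership is the only shared state; take just the list lock */
        mutex_lock(&dev->mm_lock);
        list_del(&obj->mm_list);
        mutex_unlock(&dev->mm_lock);

        /* the rest of the teardown touches only this object */
        kfree(obj);
}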
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index e4876498be47..af1abddca78e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -949,8 +949,6 @@ static void free_object(struct msm_gem_object *msm_obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -987,20 +985,14 @@ void msm_gem_free_work(struct work_struct *work) { struct msm_drm_private *priv = container_of(work, struct msm_drm_private, free_work); - struct drm_device *dev = priv->dev; struct llist_node *freed; struct msm_gem_object *msm_obj, *next; while ((freed = llist_del_all(&priv->free_list))) { - - mutex_lock(&dev->struct_mutex); - llist_for_each_entry_safe(msm_obj, next, freed, freed) free_object(msm_obj); - mutex_unlock(&dev->struct_mutex); - if (need_resched()) break; } From patchwork Mon Oct 19 20:46:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284989 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22C16C433E7 for ; Mon, 19 Oct 2020 20:46:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D475B22404 for ; Mon, 19 Oct 2020 20:46:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="gsfijITF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730295AbgJSUqS (ORCPT ); Mon, 19 Oct 2020 16:46:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730036AbgJSUqR (ORCPT ); Mon, 19 Oct 2020 16:46:17 -0400 Received: from mail-pl1-x643.google.com (mail-pl1-x643.google.com [IPv6:2607:f8b0:4864:20::643]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8432CC0613CE; Mon, 19 Oct 2020 13:46:17 -0700 (PDT) Received: by mail-pl1-x643.google.com with SMTP id d23so419566pll.7; Mon, 19 Oct 2020 13:46:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=H6+sBeRhptapBUb8S8VOKmxlxCqweD+FvkkgvD/zDmU=; b=gsfijITFoKDCthWkpijN4xeAaLrn564jSmYZFjpeBoiq47kQM7s0bwWLNhfoZyDGgk BQSUmpckTpA9dkkYZXQIiXMpa01LnEQepytwc0LLJDM9eVFVMcYidiWz4PFMYOyk/Br7 UiPY7Wlb79+QE1Zyib9KIENCI3Xnn5AHM1g4+yfgtEfS9hJNb/9K/UcNyp2RmGhyP+s/ zR/uAcZ5fHOsFiplUH3WLp1c1cB/i2xpT/W4FO/euSmQUEsSbD0fF1uhabV4Yr7qWL1W iZzVR5TMrTF3DANC+MyE0ZMqoJL22HVzb/yIkDI74glAXpQ94o0ZnENAn2ZRyLASV/8w XUqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 20/23] drm/msm: Remove msm_gem_free_work
Date: Mon, 19 Oct 2020 13:46:21 -0700
Message-Id: <20201019204636.139997-21-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

Now that we don't need struct_mutex in the free path, we can get rid of the asynchronous free altogether.
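For context, the asynchronous-free machinery being deleted follows the common llist-plus-worker deferral pattern, roughly as in this sketch (generic names, not the exact driver code): objects are pushed onto a lockless list from any context and a workqueue drains and frees them later:

#include <linux/kernel.h>
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_private {
        struct llist_head free_list;
        struct work_struct free_work;
};

struct example_object {
        struct llist_node freed;
};

static void example_free_work(struct work_struct *work)
{
        struct example_private *priv =
                container_of(work, struct example_private, free_work);
        struct example_object *obj, *next;
        struct llist_node *freed;

        /* drain the lockless list and release each deferred object */
        while ((freed = llist_del_all(&priv->free_list)))
                llist_for_each_entry_safe(obj, next, freed, freed)
                        kfree(obj);
}

static void example_free_object(struct example_private *priv,
                                struct example_object *obj)
{
        /* defer the free; schedule the worker if the list was empty */
        if (llist_add(&obj->freed, &priv->free_list))
                schedule_work(&priv->free_work);
}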
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 3 --- drivers/gpu/drm/msm/msm_drv.h | 5 ----- drivers/gpu/drm/msm/msm_gem.c | 27 --------------------------- drivers/gpu/drm/msm/msm_gem.h | 1 - 4 files changed, 36 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 15c41786d018..ebcd8e827363 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -465,9 +465,6 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) priv->wq = alloc_ordered_workqueue("msm", 0); - INIT_WORK(&priv->free_work, msm_gem_free_work); - init_llist_head(&priv->free_list); - INIT_LIST_HEAD(&priv->inactive_list); mutex_init(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 7e6fb4af4964..5308e636a90c 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -188,10 +188,6 @@ struct msm_drm_private { struct list_head inactive_list; struct mutex mm_lock; - /* worker for delayed free of objects: */ - struct work_struct free_work; - struct llist_head free_list; - struct workqueue_struct *wq; unsigned int num_planes; @@ -291,7 +287,6 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index af1abddca78e..827c7397ed12 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -939,16 +939,6 @@ void msm_gem_free_object(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - if (llist_add(&msm_obj->freed, &priv->free_list)) - queue_work(priv->wq, &priv->free_work); -} - -static void free_object(struct msm_gem_object *msm_obj) -{ - struct drm_gem_object *obj = &msm_obj->base; - struct drm_device *dev = obj->dev; - struct msm_drm_private *priv = dev->dev_private; - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -981,23 +971,6 @@ static void free_object(struct msm_gem_object *msm_obj) kfree(msm_obj); } -void msm_gem_free_work(struct work_struct *work) -{ - struct msm_drm_private *priv = - container_of(work, struct msm_drm_private, free_work); - struct llist_node *freed; - struct msm_gem_object *msm_obj, *next; - - while ((freed = llist_del_all(&priv->free_list))) { - llist_for_each_entry_safe(msm_obj, next, - freed, freed) - free_object(msm_obj); - - if (need_resched()) - break; - } -} - /* convenience method to construct a GEM buffer object, and userspace handle */ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f4e73c6f07bf..ffa2130ee97d 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -204,7 +204,6 @@ static inline bool is_vunmapable(struct msm_gem_object *msm_obj) void msm_gem_purge(struct drm_gem_object *obj); void msm_gem_vunmap(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, * associated with the cmdstream submission for synchronization (and From patchwork Mon Oct 19 
20:46:22 2020 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 284987 Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 21/23] drm/msm: Drop struct_mutex in madvise path
Date: Mon, 19 Oct 2020 13:46:22 -0700
Message-Id: <20201019204636.139997-22-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

The obj->lock is sufficient for what we need.

This *does* have the implication that userspace can try to shoot
themselves in the foot by racing madvise(DONTNEED) with submit.  But the
result will be about the same as if they did madvise(DONTNEED) before the
submit ioctl, i.e. they might not get what they want if they race with
the shrinker.  But iova fault handling is robust enough, and userspace is
only shooting its own foot.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 11 ++---------
 drivers/gpu/drm/msm/msm_gem.c |  4 +---
 drivers/gpu/drm/msm/msm_gem.h |  2 --
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ebcd8e827363..4d808769e6ed 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -939,14 +939,9 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
-
 	obj = drm_gem_object_lookup(file, args->handle);
 	if (!obj) {
-		ret = -ENOENT;
-		goto unlock;
+		return -ENOENT;
 	}
 
 	ret = msm_gem_madvise(obj, args->madv);
@@ -955,10 +950,8 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data,
 		ret = 0;
 	}
 
-	drm_gem_object_put_locked(obj);
+	drm_gem_object_put(obj);
 
-unlock:
-	mutex_unlock(&dev->struct_mutex);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 827c7397ed12..c39ba9030001 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -673,8 +673,6 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
 
 	msm_gem_lock(obj);
 
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-
 	if (msm_obj->madv != __MSM_MADV_PURGED)
 		msm_obj->madv = madv;
 
@@ -691,7 +689,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-	WARN_ON(!msm_gem_is_locked(obj));
 	WARN_ON(!is_purgeable(msm_obj));
 	WARN_ON(obj->import_attach);
 
@@ -771,6 +768,7 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
 	struct msm_drm_private *priv = obj->dev->dev_private;
 
 	might_sleep();
+	WARN_ON(!msm_gem_is_locked(obj));
 	WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
 
 	if (!atomic_fetch_inc(&msm_obj->active_count)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ffa2130ee97d..d79e7019cc88 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -190,8 +190,6 @@ static inline bool is_active(struct msm_gem_object *msm_obj)
 
 static inline bool is_purgeable(struct msm_gem_object *msm_obj)
 {
-	WARN_ON(!msm_gem_is_locked(&msm_obj->base));
-	WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex));
 	return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt &&
 			!msm_obj->base.dma_buf && !msm_obj->base.import_attach;
 }
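The foot-shooting described in the commit message above is easiest to see from the userspace
side. The sketch below shows how a buffer-object cache typically drives this ioctl.
struct drm_msm_gem_madvise, the MSM_MADV_* values and DRM_IOCTL_MSM_GEM_MADVISE are the real
msm UAPI; the helper names, the minimal error handling and the <drm/msm_drm.h> include path
are assumptions for illustration, not code from this series.

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>        /* drmIoctl(), from libdrm */
#include <drm/msm_drm.h>    /* msm UAPI header; exact path depends on installed headers */

/* Returning an idle BO to the cache: tell the kernel it may purge it. */
static void bo_cache_put(int fd, uint32_t handle)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,
		.madv = MSM_MADV_DONTNEED,
	};

	drmIoctl(fd, DRM_IOCTL_MSM_GEM_MADVISE, &req);
}

/* Taking a BO back out: mark it needed again and check whether the
 * shrinker purged the backing pages in the meantime. */
static bool bo_cache_get(int fd, uint32_t handle)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,
		.madv = MSM_MADV_WILLNEED,
	};

	if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_MADVISE, &req))
		return false;

	/* retained == 0: contents are gone, reinitialize or discard the BO. */
	return req.retained;
}

Submitting a handle that is still marked DONTNEED, the race the patch now tolerates, simply
risks the same retained == 0 outcome, and only the process that skipped the WILLNEED step
pays for it.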
From patchwork Mon Oct 19 20:46:23 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 292364
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 22/23] drm/msm: Drop struct_mutex in shrinker path
Date: Mon, 19 Oct 2020 13:46:23 -0700
Message-Id: <20201019204636.139997-23-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

Now that the inactive_list is protected by mm_lock, and everything else
that is per-object is protected by obj->lock, we no longer depend on
struct_mutex.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          |  1 -
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
 2 files changed, 55 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index c39ba9030001..cf17c79d99ae 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -688,7 +688,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	struct drm_device *dev = obj->dev;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 	WARN_ON(!is_purgeable(msm_obj));
 	WARN_ON(obj->import_attach);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 6be073b8ca08..6f4b1355725f 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -8,48 +8,13 @@
 #include "msm_gem.h"
 #include "msm_gpu_trace.h"
 
-static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
-{
-	/* NOTE: we are *closer* to being able to get rid of
-	 * mutex_trylock_recursive().. the msm_gem code itself does
-	 * not need struct_mutex, although codepaths that can trigger
-	 * shrinker are still called in code-paths that hold the
-	 * struct_mutex.
-	 *
-	 * Also, msm_obj->madv is protected by struct_mutex.
-	 *
-	 * The next step is probably split out a seperate lock for
-	 * protecting inactive_list, so that shrinker does not need
-	 * struct_mutex.
-	 */
-	switch (mutex_trylock_recursive(&dev->struct_mutex)) {
-	case MUTEX_TRYLOCK_FAILED:
-		return false;
-
-	case MUTEX_TRYLOCK_SUCCESS:
-		*unlock = true;
-		return true;
-
-	case MUTEX_TRYLOCK_RECURSIVE:
-		*unlock = false;
-		return true;
-	}
-
-	BUG();
-}
-
 static unsigned long
 msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv =
 		container_of(shrinker, struct msm_drm_private, shrinker);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned long count = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return 0;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -63,9 +28,6 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	return count;
 }
 
@@ -74,13 +36,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv =
 		container_of(shrinker, struct msm_drm_private, shrinker);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned long freed = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return SHRINK_STOP;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -98,9 +55,6 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	if (freed > 0)
 		trace_msm_gem_purge(freed << PAGE_SHIFT);
 
@@ -112,13 +66,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 {
 	struct msm_drm_private *priv =
 		container_of(nb, struct msm_drm_private, vmap_notifier);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned unmapped = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return NOTIFY_DONE;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -141,9 +90,6 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	*(unsigned long *)ptr += unmapped;
 
 	if (unmapped > 0)
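The end state the commit message describes, a shrinker that only ever takes the driver's own
list lock, has roughly the shape below. This is an illustrative sketch against the shrinker
API of this kernel vintage (register_shrinker() taking a struct shrinker pointer), with
made-up my_priv and my_obj types standing in for the msm ones; the real scan path additionally
takes the per-object lock before dropping backing pages.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

struct my_obj {
	struct list_head mm_list;
	unsigned long npages;
	bool purgeable;			/* per-object state, normally under the obj lock */
};

struct my_priv {
	struct mutex mm_lock;		/* protects inactive_list only */
	struct list_head inactive_list;
	struct shrinker shrinker;
};

static unsigned long
my_shrink_count(struct shrinker *s, struct shrink_control *sc)
{
	struct my_priv *priv = container_of(s, struct my_priv, shrinker);
	struct my_obj *obj;
	unsigned long count = 0;

	mutex_lock(&priv->mm_lock);
	list_for_each_entry(obj, &priv->inactive_list, mm_list)
		if (obj->purgeable)
			count += obj->npages;
	mutex_unlock(&priv->mm_lock);

	return count;
}

static unsigned long
my_shrink_scan(struct shrinker *s, struct shrink_control *sc)
{
	struct my_priv *priv = container_of(s, struct my_priv, shrinker);
	struct my_obj *obj;
	unsigned long freed = 0;

	mutex_lock(&priv->mm_lock);
	list_for_each_entry(obj, &priv->inactive_list, mm_list) {
		if (freed >= sc->nr_to_scan)
			break;
		if (obj->purgeable) {
			/* the real code drops the backing pages here,
			 * under the per-object lock */
			obj->purgeable = false;
			freed += obj->npages;
		}
	}
	mutex_unlock(&priv->mm_lock);

	return freed;
}

static int my_shrinker_init(struct my_priv *priv)
{
	priv->shrinker.count_objects = my_shrink_count;
	priv->shrinker.scan_objects = my_shrink_scan;
	priv->shrinker.seeks = DEFAULT_SEEKS;
	return register_shrinker(&priv->shrinker);
}

Because nothing in this shape can already be held by the code paths that trigger reclaim, the
mutex_trylock_recursive() dance deleted in the hunk above is no longer needed.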
From patchwork Mon Oct 19 20:46:24 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 284988
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 23/23] drm/msm: Don't implicit-sync if only a single ring
Date: Mon, 19 Oct 2020 13:46:24 -0700
Message-Id: <20201019204636.139997-24-robdclark@gmail.com>
In-Reply-To: <20201019204636.139997-1-robdclark@gmail.com>

From: Rob Clark

If there is only a single ring (no preemption), everything is in FIFO
order and there is no need for implicit sync.

Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as behavior
is undefined when fences are not used to synchronize buffer usage across
contexts (which is the only case where multiple different priority rings
could come into play).
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d784e97f233f..96832debc3b6 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -277,7 +277,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit)
 	return ret;
 }
 
-static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit)
+static int submit_fence_sync(struct msm_gem_submit *submit, bool implicit_sync)
 {
 	int i, ret = 0;
 
@@ -297,7 +297,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit)
 				return ret;
 		}
 
-		if (no_implicit)
+		if (!implicit_sync)
 			continue;
 
 		ret = msm_gem_sync_object(&msm_obj->base, submit->ring->fctx,
@@ -768,7 +768,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (ret)
 		goto out;
 
-	ret = submit_fence_sync(submit, !!(args->flags & MSM_SUBMIT_NO_IMPLICIT));
+	ret = submit_fence_sync(submit, (gpu->nr_rings > 1) &&
+			!(args->flags & MSM_SUBMIT_NO_IMPLICIT));
 	if (ret)
 		goto out;
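Condensed, the new call-site behaviour amounts to the helper below. This is a paraphrase for
clarity rather than code from the patch: MSM_SUBMIT_NO_IMPLICIT is the real UAPI flag, while
want_implicit_sync() is a made-up name.

#include <linux/types.h>
#include <drm/msm_drm.h>	/* MSM_SUBMIT_NO_IMPLICIT */

static bool want_implicit_sync(unsigned int nr_rings, __u32 submit_flags)
{
	/* A single ring means FIFO ordering already serializes all
	 * submits, so implicit sync adds nothing. */
	if (nr_rings <= 1)
		return false;

	/* Userspace opted out explicitly (what the commit message
	 * suggests Mesa should always do). */
	if (submit_flags & MSM_SUBMIT_NO_IMPLICIT)
		return false;

	return true;
}

The implicit-sync wait in submit_fence_sync() is then skipped whenever this evaluates to false.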