From patchwork Wed Aug 2 22:21:49 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 709243
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
    David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 1/4] drm/msm: Take lru lock once per job_run
Date: Wed, 2 Aug 2023 15:21:49 -0700
Message-ID: <20230802222158.11838-2-robdclark@gmail.com>
In-Reply-To: <20230802222158.11838-1-robdclark@gmail.com>
References: <20230802222158.11838-1-robdclark@gmail.com>

From: Rob Clark

Rather than acquiring it and dropping it for each individual obj.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c        | 3 ---
 drivers/gpu/drm/msm/msm_ringbuffer.c | 5 +++++
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 20cfd86d2b32..6d1dbffc3905 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -509,14 +509,11 @@ void msm_gem_unpin_locked(struct drm_gem_object *obj)
  */
 void msm_gem_unpin_active(struct drm_gem_object *obj)
 {
-	struct msm_drm_private *priv = obj->dev->dev_private;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	mutex_lock(&priv->lru.lock);
 	msm_obj->pin_count--;
 	GEM_WARN_ON(msm_obj->pin_count < 0);
 	update_lru_active(obj);
-	mutex_unlock(&priv->lru.lock);
 }
 
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index b60199184409..8b8353dcde9f 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -16,10 +16,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	struct msm_gem_submit *submit = to_msm_submit(job);
 	struct msm_fence_context *fctx = submit->ring->fctx;
 	struct msm_gpu *gpu = submit->gpu;
+	struct msm_drm_private *priv = gpu->dev->dev_private;
 	int i;
 
 	msm_fence_init(submit->hw_fence, fctx);
 
+	mutex_lock(&priv->lru.lock);
+
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = &submit->bos[i].obj->base;
 
@@ -28,6 +31,8 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 		submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED);
 	}
 
+	mutex_unlock(&priv->lru.lock);
+
 	/* TODO move submit path over to using a per-ring lock..
 	 */
 	mutex_lock(&gpu->lock);
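
The change above amounts to hoisting the lru.lock acquisition out of the
per-object loop in msm_job_run(). As a rough, self-contained sketch of the
same lock-batching pattern (userspace pthreads and made-up names, not the
actual msm structures or API):

/* Sketch only: illustrates lock batching, not the real driver code. */
#include <pthread.h>
#include <stdio.h>

struct obj {
	int pin_count;
};

/* Stand-in for priv->lru.lock; hypothetical, for illustration only. */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Before: one lock/unlock round trip per object. */
static void unpin_each(struct obj *objs, int n)
{
	for (int i = 0; i < n; i++) {
		pthread_mutex_lock(&lru_lock);
		objs[i].pin_count--;
		pthread_mutex_unlock(&lru_lock);
	}
}

/* After: the lock is taken once around the whole loop. */
static void unpin_batched(struct obj *objs, int n)
{
	pthread_mutex_lock(&lru_lock);
	for (int i = 0; i < n; i++)
		objs[i].pin_count--;
	pthread_mutex_unlock(&lru_lock);
}

int main(void)
{
	struct obj objs[4] = { {2}, {2}, {2}, {2} };

	unpin_each(objs, 4);	/* pin_count: 2 -> 1 for each object */
	unpin_batched(objs, 4);	/* pin_count: 1 -> 0, single lock acquisition */
	printf("obj[0].pin_count = %d\n", objs[0].pin_count);
	return 0;
}

Both helpers do the same bookkeeping; the batched form simply pays the
lock/unlock cost once per job instead of once per buffer object.
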
From patchwork Wed Aug 2 22:21:51 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 709242
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
    David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 3/4] drm/msm: Take lru lock once per submit_pin_objects()
Date: Wed, 2 Aug 2023 15:21:51 -0700
Message-ID: <20230802222158.11838-4-robdclark@gmail.com>
In-Reply-To: <20230802222158.11838-1-robdclark@gmail.com>
References: <20230802222158.11838-1-robdclark@gmail.com>

From: Rob Clark

Split out pin_count incrementing and lru updating into a separate loop so
we can take the lru lock only once for all objs. Since we are still holding
the obj lock, it is safe to split this up.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c        | 45 ++++++++++++++++++----------
 drivers/gpu/drm/msm/msm_gem.h        |  1 +
 drivers/gpu/drm/msm/msm_gem_submit.c | 10 ++++++-
 3 files changed, 39 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 6d1dbffc3905..1c81ff6115ac 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -222,9 +222,7 @@ static void put_pages(struct drm_gem_object *obj)
 static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj,
 					      unsigned madv)
 {
-	struct msm_drm_private *priv = obj->dev->dev_private;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct page **p;
 
 	msm_gem_assert_locked(obj);
 
@@ -234,16 +232,29 @@ static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj,
 		return ERR_PTR(-EBUSY);
 	}
 
-	p = get_pages(obj);
-	if (IS_ERR(p))
-		return p;
+	return get_pages(obj);
+}
+
+/*
+ * Update the pin count of the object, call under lru.lock
+ */
+void msm_gem_pin_obj_locked(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
+
+	msm_gem_assert_locked(obj);
+
+	to_msm_bo(obj)->pin_count++;
+	drm_gem_lru_move_tail_locked(&priv->lru.pinned, obj);
+}
+
+static void pin_obj_locked(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
 
 	mutex_lock(&priv->lru.lock);
-	msm_obj->pin_count++;
-	update_lru_locked(obj);
+	msm_gem_pin_obj_locked(obj);
 	mutex_unlock(&priv->lru.lock);
-
-	return p;
 }
 
 struct page **msm_gem_pin_pages(struct drm_gem_object *obj)
@@ -252,6 +263,8 @@ struct page **msm_gem_pin_pages(struct drm_gem_object *obj)
 
 	msm_gem_lock(obj);
 	p = msm_gem_pin_pages_locked(obj, MSM_MADV_WILLNEED);
+	if (!IS_ERR(p))
+		pin_obj_locked(obj);
 	msm_gem_unlock(obj);
 
 	return p;
@@ -463,7 +476,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct page **pages;
-	int ret, prot = IOMMU_READ;
+	int prot = IOMMU_READ;
 
 	if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
 		prot |= IOMMU_WRITE;
@@ -480,11 +493,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	ret = msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
-	if (ret)
-		msm_gem_unpin_locked(obj);
-
-	return ret;
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
@@ -536,8 +545,10 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 		return PTR_ERR(vma);
 
 	ret = msm_gem_pin_vma_locked(obj, vma);
-	if (!ret)
+	if (!ret) {
 		*iova = vma->iova;
+		pin_obj_locked(obj);
+	}
 
 	return ret;
 }
@@ -700,6 +711,8 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	if (IS_ERR(pages))
 		return ERR_CAST(pages);
 
+	pin_obj_locked(obj);
+
 	/* increment vmap_count *before* vmap() call, so shrinker can
 	 * check vmap_count (is_vunmapable()) outside of msm_obj lock.
 	 * This guarantees that we won't try to msm_gem_vunmap() this
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 31b370474fa8..2ddd896aac68 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -142,6 +142,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 			     struct msm_gem_address_space *aspace, uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
 			struct msm_gem_address_space *aspace);
+void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages(struct drm_gem_object *obj);
 void msm_gem_unpin_pages(struct drm_gem_object *obj);
 int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a03bdded1a15..b17561ebd518 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -384,6 +384,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit)
 
 static int submit_pin_objects(struct msm_gem_submit *submit)
 {
+	struct msm_drm_private *priv = submit->dev->dev_private;
 	int i, ret = 0;
 
 	submit->valid = true;
@@ -403,7 +404,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		if (ret)
 			break;
 
-		submit->bos[i].flags |= BO_OBJ_PINNED | BO_VMA_PINNED;
+		submit->bos[i].flags |= BO_VMA_PINNED;
 		submit->bos[i].vma = vma;
 
 		if (vma->iova == submit->bos[i].iova) {
@@ -416,6 +417,13 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		}
 	}
 
+	mutex_lock(&priv->lru.lock);
+	for (i = 0; i < submit->nr_bos; i++) {
+		msm_gem_pin_obj_locked(submit->bos[i].obj);
+		submit->bos[i].flags |= BO_OBJ_PINNED;
+	}
+	mutex_unlock(&priv->lru.lock);
+
 	return ret;
 }
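
The submit_pin_objects() change works in two phases: the fallible per-object
pinning stays in the first loop, and the shared-lock bookkeeping is batched
into a second loop under a single lru.lock acquisition. A rough standalone
sketch of that two-phase split (pthreads and hypothetical names, not the real
driver code):

/* Sketch only: two-phase pin loop with batched lock bookkeeping. */
#include <pthread.h>
#include <stdio.h>

#define NR_BOS 4

struct bo {
	int pinned;	/* stands in for the BO_OBJ_PINNED bookkeeping */
};

/* Stand-in for priv->lru.lock; hypothetical, for illustration only. */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Phase 1: per-object work that may fail; no shared lock needed. */
static int map_bo(struct bo *bo)
{
	(void)bo;		/* real code would set up the vma/mapping here */
	return 0;		/* 0 on success, negative errno on failure */
}

static int pin_objects(struct bo *bos, int n)
{
	int i, ret;

	for (i = 0; i < n; i++) {
		ret = map_bo(&bos[i]);
		if (ret)
			return ret;
	}

	/* Phase 2: batched bookkeeping under a single lock acquisition. */
	pthread_mutex_lock(&lru_lock);
	for (i = 0; i < n; i++)
		bos[i].pinned = 1;
	pthread_mutex_unlock(&lru_lock);

	return 0;
}

int main(void)
{
	struct bo bos[NR_BOS] = { 0 };

	if (pin_objects(bos, NR_BOS) == 0)
		printf("all %d objects pinned\n", NR_BOS);
	return 0;
}

Splitting the loop is safe here only because each object remains locked
across both phases, which is the point the commit message makes.
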