From patchwork Wed Jul 17 18:33:51 2019
X-Patchwork-Submitter: Rob Herring
X-Patchwork-Id: 169155
From: Rob Herring
To: dri-devel@lists.freedesktop.org
Cc: Boris Brezillon, Robin Murphy, Alyssa Rosenzweig, Tomeu Vizoso,
 Steven Price
Subject: [PATCH 4/5] drm/panfrost: Add support for GPU heap allocations
Date: Wed, 17 Jul 2019 12:33:51 -0600
Message-Id: <20190717183352.22519-4-robh@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190717183352.22519-1-robh@kernel.org>
References: <20190717183352.22519-1-robh@kernel.org>

The midgard/bifrost GPUs need to allocate GPU heap memory which is
allocated on GPU page faults rather than pinned in memory up front. The
vendor driver calls this functionality GROW_ON_GPF.

This implementation assumes that BOs allocated with the PANFROST_BO_HEAP
flag are never mmapped or exported. Both of those may actually work, but
I'm unsure if there's some interaction there. Either one would cause the
whole object to be pinned in memory, which would defeat the point of
heap allocations.

On faults, we map in 2MB at a time in order to utilize huge pages (if
enabled). Currently, once pages are mapped in, they are only unmapped
when the BO is freed. Once we add shrinker support, the shrinker will be
able to unmap pages as well.
Cc: Tomeu Vizoso
Cc: Boris Brezillon
Cc: Robin Murphy
Cc: Steven Price
Cc: Alyssa Rosenzweig
Signed-off-by: Rob Herring
---
 drivers/gpu/drm/panfrost/TODO           |   2 -
 drivers/gpu/drm/panfrost/panfrost_drv.c |   2 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c |  14 ++-
 drivers/gpu/drm/panfrost/panfrost_gem.h |   8 ++
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 114 +++++++++++++++++++++---
 include/uapi/drm/panfrost_drm.h         |   1 +
 6 files changed, 125 insertions(+), 16 deletions(-)
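As a usage sketch, creating a heap BO from userspace only needs the new
flag on the existing create ioctl (illustrative only: the helper name
and size are made up, and the libdrm drmIoctl() wrapper plus an
already-open DRM fd are assumed):

    #include <drm/panfrost_drm.h>   /* uapi header as extended by this patch */
    #include <xf86drm.h>

    /* Ask for a 16MB heap; the kernel rounds the size up to a 2MB
     * multiple and backs it with pages only as the GPU faults them in. */
    static int create_heap_bo(int fd, __u32 *handle, __u64 *gpu_va)
    {
            struct drm_panfrost_create_bo arg = {
                    .size = 16 * 1024 * 1024,
                    .flags = PANFROST_BO_HEAP, /* implies NOEXEC in the kernel */
            };
            int ret = drmIoctl(fd, DRM_IOCTL_PANFROST_CREATE_BO, &arg);

            if (ret)
                    return ret;
            *handle = arg.handle;   /* GEM handle */
            *gpu_va = arg.offset;   /* BO's GPU virtual address */
            return 0;
    }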
diff --git a/drivers/gpu/drm/panfrost/TODO b/drivers/gpu/drm/panfrost/TODO
index c2e44add37d8..64129bf73933 100644
--- a/drivers/gpu/drm/panfrost/TODO
+++ b/drivers/gpu/drm/panfrost/TODO
@@ -14,8 +14,6 @@
   The hard part is handling when more address spaces are needed than what
   the h/w provides.
 
-- Support pinning pages on demand (GPU page faults).
-
 - Support userspace controlled GPU virtual addresses. Needed for Vulkan. (Tomeu)
 
 - Support for madvise and a shrinker.
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index b91e991bc6a3..9e87d0060202 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -50,7 +50,7 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 	struct drm_panfrost_create_bo *args = data;
 
 	if (!args->size || args->pad ||
-	    (args->flags & ~PANFROST_BO_NOEXEC))
+	    (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))
 		return -EINVAL;
 
 	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 37ffec8391da..528396000038 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -87,7 +87,10 @@ static int panfrost_gem_map(struct panfrost_device *pfdev, struct panfrost_gem_o
 	if (ret)
 		return ret;
 
-	return panfrost_mmu_map(bo);
+	if (!bo->is_heap)
+		ret = panfrost_mmu_map(bo);
+
+	return ret;
 }
 
 struct panfrost_gem_object *
@@ -101,7 +104,11 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	struct panfrost_gem_object *bo;
 
-	size = roundup(size, PAGE_SIZE);
+	/* Round up heap allocations to 2MB to keep fault handling simple */
+	if (flags & PANFROST_BO_HEAP)
+		size = roundup(size, SZ_2M);
+	else
+		size = roundup(size, PAGE_SIZE);
 
 	shmem = drm_gem_shmem_create_with_handle(file_priv, dev, size, handle);
 	if (IS_ERR(shmem))
@@ -109,6 +116,9 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 
 	bo = to_panfrost_bo(&shmem->base);
 	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
+	bo->is_heap = !!(flags & PANFROST_BO_HEAP);
+	if (bo->is_heap)
+		bo->noexec = true;
 
 	ret = panfrost_gem_map(pfdev, bo);
 	if (ret)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 132f02399b7b..c500ca6b9072 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -13,6 +13,7 @@ struct panfrost_gem_object {
 	struct drm_mm_node node;
 	bool is_mapped		:1;
 	bool noexec		:1;
+	bool is_heap		:1;
 };
 
 static inline
@@ -21,6 +22,13 @@ struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)
 	return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base);
 }
 
+static inline
+struct panfrost_gem_object *drm_mm_node_to_panfrost_bo(struct drm_mm_node *node)
+{
+	return container_of(node, struct panfrost_gem_object, node);
+}
+
+
 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);
 
 struct panfrost_gem_object *
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d18484a07bfa..3b95c7027188 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -3,6 +3,7 @@
 /* Copyright (C) 2019 Arm Ltd. */
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
 #include <...>
@@ -10,6 +11,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include "panfrost_device.h"
@@ -257,12 +259,12 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
 		size_t unmapped_page;
 		size_t pgsize = get_pgsize(iova, len - unmapped_len);
 
-		unmapped_page = ops->unmap(ops, iova, pgsize);
-		if (!unmapped_page)
-			break;
-
-		iova += unmapped_page;
-		unmapped_len += unmapped_page;
+		if (ops->iova_to_phys(ops, iova)) {
+			unmapped_page = ops->unmap(ops, iova, pgsize);
+			WARN_ON(unmapped_page != pgsize);
+		}
+		iova += pgsize;
+		unmapped_len += pgsize;
 	}
 
 	mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
@@ -298,6 +300,86 @@ static const struct iommu_gather_ops mmu_tlb_ops = {
 	.tlb_sync	= mmu_tlb_sync_context,
 };
 
+static struct drm_mm_node *addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
+{
+	struct drm_mm_node *node;
+	u64 offset = addr >> PAGE_SHIFT;
+
+	drm_mm_for_each_node(node, &pfdev->mm) {
+		if (offset >= node->start && offset < (node->start + node->size))
+			return node;
+	}
+	return NULL;
+}
+
+#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE)
+
+int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr)
+{
+	int ret, i;
+	struct drm_mm_node *node;
+	struct panfrost_gem_object *bo;
+	struct address_space *mapping;
+	pgoff_t page_offset;
+	struct sg_table sgt = {};
+	struct page **pages;
+
+	node = addr_to_drm_mm_node(pfdev, as, addr);
+	if (!node)
+		return -ENOENT;
+
+	bo = drm_mm_node_to_panfrost_bo(node);
+	if (!bo->is_heap) {
+		dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)",
+			 node->start << PAGE_SHIFT);
+		return -EINVAL;
+	}
+	/* Assume 2MB alignment and size multiple */
+	addr &= ~((u64)SZ_2M - 1);
+	page_offset = addr >> PAGE_SHIFT;
+	page_offset -= node->start;
+
+	pages = kvmalloc_array(NUM_FAULT_PAGES, sizeof(struct page *), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	mapping = bo->base.base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
+
+	for (i = 0; i < NUM_FAULT_PAGES; i++) {
+		pages[i] = shmem_read_mapping_page(mapping, page_offset + i);
+		if (IS_ERR(pages[i])) {
+			ret = PTR_ERR(pages[i]);
+			goto err_pages;
+		}
+	}
+
+	ret = sg_alloc_table_from_pages(&sgt, pages, NUM_FAULT_PAGES, 0,
+					SZ_2M, GFP_KERNEL);
+	if (ret)
+		goto err_pages;
+
+	if (dma_map_sg(pfdev->dev, sgt.sgl, sgt.nents, DMA_BIDIRECTIONAL) == 0) {
+		ret = -EINVAL;
+		goto err_map;
+	}
+
+	mmu_map_sg(pfdev, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, &sgt);
+
+	mmu_write(pfdev, MMU_INT_CLEAR, BIT(as));
+	bo->is_mapped = true;
+
+	dev_dbg(pfdev->dev, "mapped page fault @ %llx", addr);
+
+	return 0;
+
+err_map:
+	sg_free_table(&sgt);
+err_pages:
+	kvfree(pages);
+	return ret;
+}
+
 static const char *access_type_name(struct panfrost_device *pfdev,
 				    u32 fault_status)
 {
@@ -323,13 +405,11 @@ static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 {
 	struct panfrost_device *pfdev = data;
 	u32 status = mmu_read(pfdev, MMU_INT_STAT);
-	int i;
+	int i, ret;
 
 	if (!status)
 		return IRQ_NONE;
 
-	dev_err(pfdev->dev, "mmu irq status=%x\n", status);
-
 	for (i = 0; status; i++) {
 		u32 mask = BIT(i) | BIT(i + 16);
 		u64 addr;
@@ -350,6 +430,17 @@ static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 		access_type = (fault_status >> 8) & 0x3;
 		source_id = (fault_status >> 16);
 
+		/* Page fault only */
+		if ((status & mask) == BIT(i)) {
+			WARN_ON(exception_type < 0xC1 || exception_type > 0xC4);
+
+			ret = panfrost_mmu_map_fault_addr(pfdev, i, addr);
+			if (!ret) {
+				status &= ~mask;
+				continue;
+			}
+		}
+
 		/* terminal fault, print info about the fault */
 		dev_err(pfdev->dev,
 			"Unhandled Page fault in AS%d at VA 0x%016llX\n"
@@ -391,8 +482,9 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
 	if (irq <= 0)
 		return -ENODEV;
 
-	err = devm_request_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
-			       IRQF_SHARED, "mmu", pfdev);
+	err = devm_request_threaded_irq(pfdev->dev, irq, NULL,
+					panfrost_mmu_irq_handler,
+					IRQF_ONESHOT, "mmu", pfdev);
 
 	if (err) {
 		dev_err(pfdev->dev, "failed to request mmu irq");
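To make the fault granularity concrete, this is the set-up arithmetic
panfrost_mmu_map_fault_addr() performs, pulled out as a standalone
sketch (illustration only, not driver code; 4K pages assumed, so one
fault maps NUM_FAULT_PAGES = 512 pages):

    #define SZ_2M           (2 * 1024 * 1024)
    #define PAGE_SHIFT      12                      /* assumes 4K pages */
    #define NUM_FAULT_PAGES (SZ_2M >> PAGE_SHIFT)   /* 512 pages per fault */

    /* E.g. a fault at GPU VA 0x08805000 in a BO whose drm_mm node starts
     * at page 0x8000 rounds down to chunk base 0x08800000 and yields
     * page_offset 0x800, i.e. 8MB into the BO's shmem backing store. */
    static unsigned long long fault_page_offset(unsigned long long addr,
                                                unsigned long long node_start_page)
    {
            addr &= ~((unsigned long long)SZ_2M - 1);   /* 2MB-align the fault */
            return (addr >> PAGE_SHIFT) - node_start_page;
    }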
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index 17fb5d200f7a..9150dd75aad8 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -83,6 +83,7 @@ struct drm_panfrost_wait_bo {
 };
 
 #define PANFROST_BO_NOEXEC	1
+#define PANFROST_BO_HEAP	2
 
 /**
  * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs.
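For reference, this is my reading of the MMU_INT_STAT decoding in the
irq handler above (an inference from the mask construction, not h/w
documentation): the low 16 bits carry one page-fault bit per address
space and the high 16 bits the corresponding bus-fault bits, and only a
pure page fault is recoverable by mapping pages in. As a sketch, using
the kernel's BIT() macro:

    /* Illustration only: a fault on AS 'as' can be handled by
     * panfrost_mmu_map_fault_addr() only if the page-fault bit is set
     * and the bus-fault bit is clear; anything else stays terminal. */
    static bool as_has_recoverable_fault(u32 status, int as)
    {
            u32 mask = BIT(as) | BIT(as + 16);

            return (status & mask) == BIT(as);
    }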