From patchwork Fri Sep 25 16:18:47 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309194
X-Mailing-List: stable@vger.kernel.org
Date: Fri, 25 Sep 2020 09:18:47 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-2-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 01/30 for 5.4] dma-direct: remove __dma_direct_free_pages
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Max Filippov

From: Christoph Hellwig

Commit acaade1af3587132e7ea585f470a95261e14f60c upstream.

We can just call dma_free_contiguous directly instead of wrapping it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Max Filippov
Signed-off-by: Peter Gonda
---
 include/linux/dma-direct.h |  1 -
 kernel/dma/direct.c        | 11 +++--------
 kernel/dma/remap.c         |  4 ++--
 3 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 6a18a97b76a8..02a418520062 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -76,6 +76,5 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
         dma_addr_t dma_addr, unsigned long attrs);
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
         dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
-void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page);
 int dma_direct_supported(struct device *dev, u64 mask);
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 0a093a675b63..86f580439c9c 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -154,7 +154,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
          * so log an error and fail.
          */
         dev_info(dev, "Rejecting highmem page from CMA.\n");
-        __dma_direct_free_pages(dev, size, page);
+        dma_free_contiguous(dev, page, size);
         return NULL;
     }
 
@@ -176,11 +176,6 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
     return ret;
 }
 
-void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
-{
-    dma_free_contiguous(dev, page, size);
-}
-
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
         dma_addr_t dma_addr, unsigned long attrs)
 {
@@ -189,7 +184,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
     if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
         !force_dma_unencrypted(dev)) {
         /* cpu_addr is a struct page cookie, not a kernel address */
-        __dma_direct_free_pages(dev, size, cpu_addr);
+        dma_free_contiguous(dev, cpu_addr, size);
         return;
     }
 
@@ -199,7 +194,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
     if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
         dma_alloc_need_uncached(dev, attrs))
         cpu_addr = cached_kernel_address(cpu_addr);
-    __dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
+    dma_free_contiguous(dev, virt_to_page(cpu_addr), size);
 }
 
 void *dma_direct_alloc(struct device *dev, size_t size,
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index c00b9258fa6a..fb1e50c2d48a 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -238,7 +238,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
             dma_pgprot(dev, PAGE_KERNEL, attrs),
             __builtin_return_address(0));
     if (!ret) {
-        __dma_direct_free_pages(dev, size, page);
+        dma_free_contiguous(dev, page, size);
         return ret;
     }
 
@@ -256,7 +256,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
         struct page *page = pfn_to_page(__phys_to_pfn(phys));
 
         vunmap(vaddr);
-        __dma_direct_free_pages(dev, size, page);
+        dma_free_contiguous(dev, page, size);
     }
 }
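For context, the removed helper was a one-line wrapper around dma_free_contiguous(), which pairs with dma_alloc_contiguous(). A minimal sketch of that pairing (hypothetical caller, not part of the patch):

    /* Hypothetical caller, for illustration only. */
    size_t alloc_size = PAGE_ALIGN(size);
    struct page *page = dma_alloc_contiguous(dev, alloc_size, GFP_KERNEL);

    if (page)
        dma_free_contiguous(dev, page, alloc_size); /* was __dma_direct_free_pages() */
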
From patchwork Fri Sep 25 16:18:48 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263452
X-Mailing-List: stable@vger.kernel.org
Date: Fri, 25 Sep 2020 09:18:48 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-3-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 02/30 for 5.4] dma-direct: remove the dma_handle argument to
 __dma_direct_alloc_pages
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Max Filippov

From: Christoph Hellwig

Commit 4e1003aa56a7d60ddb048e43a7a51368fcfe36af upstream.

The argument isn't used anywhere, so stop passing it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Max Filippov
Signed-off-by: Peter Gonda
---
 include/linux/dma-direct.h | 2 +-
 kernel/dma/direct.c        | 4 ++--
 kernel/dma/remap.c         | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 02a418520062..3238177e65ad 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -75,6 +75,6 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
         dma_addr_t dma_addr, unsigned long attrs);
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
-        dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+        gfp_t gfp, unsigned long attrs);
 int dma_direct_supported(struct device *dev, u64 mask);
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 86f580439c9c..9621993bf2bb 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -84,7 +84,7 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 }
 
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
-        dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+        gfp_t gfp, unsigned long attrs)
 {
     size_t alloc_size = PAGE_ALIGN(size);
     int node = dev_to_node(dev);
@@ -132,7 +132,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
     struct page *page;
     void *ret;
 
-    page = __dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
+    page = __dma_direct_alloc_pages(dev, size, gfp, attrs);
     if (!page)
         return NULL;
 
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index fb1e50c2d48a..90d5ce77c189 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -226,7 +226,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
         goto done;
     }
 
-    page = __dma_direct_alloc_pages(dev, size, dma_handle, flags, attrs);
+    page = __dma_direct_alloc_pages(dev, size, flags, attrs);
     if (!page)
         return NULL;
 
From patchwork Fri Sep 25 16:18:49 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263451
X-Mailing-List: stable@vger.kernel.org
Date: Fri, 25 Sep 2020 09:18:49 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-4-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 03/30 for 5.4] dma-direct: provide mmap and get_sgtable
 method overrides
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Max Filippov

From: Christoph Hellwig

Commit 34dc0ea6bc960f1f57b2148f01a3f4da23f87013 upstream.

For dma-direct we know that the DMA address is an encoding of the
physical address that we can trivially decode.  Use that fact to
provide implementations that do not need the arch_dma_coherent_to_pfn
architecture hook.

Note that we can still support mmap of non-coherent memory only if the
architecture provides a way to set an uncached bit in the page tables.
This must be true for architectures that use the generic remap helpers,
but other architectures can also manually select it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Max Filippov
Signed-off-by: Peter Gonda
---
 arch/arc/Kconfig                       |  1 -
 arch/arm/Kconfig                       |  1 -
 arch/arm/mm/dma-mapping.c              |  6 ---
 arch/arm64/Kconfig                     |  1 -
 arch/ia64/Kconfig                      |  2 +-
 arch/ia64/kernel/dma-mapping.c         |  6 ---
 arch/microblaze/Kconfig                |  1 -
 arch/mips/Kconfig                      |  4 +-
 arch/mips/mm/dma-noncoherent.c         |  6 ---
 arch/powerpc/platforms/Kconfig.cputype |  1 -
 include/linux/dma-direct.h             |  7 +++
 include/linux/dma-noncoherent.h        |  2 -
 kernel/dma/Kconfig                     | 12 ++++--
 kernel/dma/direct.c                    | 59 ++++++++++++++++++++++++++
 kernel/dma/mapping.c                   | 45 +++-----------------
 kernel/dma/remap.c                     |  6 ---
 16 files changed, 85 insertions(+), 75 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 8383155c8c82..4d7b671c8ff4 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -6,7 +6,6 @@ config ARC
     def_bool y
     select ARC_TIMERS
-    select ARCH_HAS_DMA_COHERENT_TO_PFN
     select ARCH_HAS_DMA_PREP_COHERENT
     select ARCH_HAS_PTE_SPECIAL
     select ARCH_HAS_SETUP_DMA_OPS
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 05c9bbfe444d..fac9999d6ef5 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -7,7 +7,6 @@ config ARM
     select ARCH_HAS_BINFMT_FLAT
     select ARCH_HAS_DEBUG_VIRTUAL if MMU
     select ARCH_HAS_DEVMEM_IS_ALLOWED
-    select ARCH_HAS_DMA_COHERENT_TO_PFN if SWIOTLB
     select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
     select ARCH_HAS_ELF_RANDOMIZE
     select ARCH_HAS_FORTIFY_SOURCE
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 27576c7b836e..58d5765fb129 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -2346,12 +2346,6 @@ void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                   size, dir);
 }
 
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-        dma_addr_t dma_addr)
-{
-    return dma_to_pfn(dev, dma_addr);
-}
-
 void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
         gfp_t gfp, unsigned long attrs)
 {
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a0bc9bbb92f3..bc45a704987f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -12,7 +12,6 @@ config ARM64
     select ARCH_CLOCKSOURCE_DATA
     select ARCH_HAS_DEBUG_VIRTUAL
     select ARCH_HAS_DEVMEM_IS_ALLOWED
-    select ARCH_HAS_DMA_COHERENT_TO_PFN
     select ARCH_HAS_DMA_PREP_COHERENT
     select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
     select ARCH_HAS_FAST_MULTIPLIER
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 16714477eef4..bab7cd878464 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -33,7 +33,7 @@ config IA64
     select HAVE_ARCH_TRACEHOOK
     select HAVE_MEMBLOCK_NODE_MAP
     select HAVE_VIRT_CPU_ACCOUNTING
-    select ARCH_HAS_DMA_COHERENT_TO_PFN
+    select DMA_NONCOHERENT_MMAP
     select ARCH_HAS_SYNC_DMA_FOR_CPU
     select VIRT_TO_BUS
     select GENERIC_IRQ_PROBE
diff --git a/arch/ia64/kernel/dma-mapping.c b/arch/ia64/kernel/dma-mapping.c
index 4a3262795890..09ef9ce9988d 100644
--- a/arch/ia64/kernel/dma-mapping.c
+++ b/arch/ia64/kernel/dma-mapping.c
@@ -19,9 +19,3 @@ void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
 {
     dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs);
 }
-
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-        dma_addr_t dma_addr)
-{
-    return page_to_pfn(virt_to_page(cpu_addr));
-}
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index c9c4be822456..261c26df1c9f 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -4,7 +4,6 @@ config MICROBLAZE
     select ARCH_32BIT_OFF_T
     select ARCH_NO_SWAP
     select ARCH_HAS_BINFMT_FLAT if !MMU
-    select ARCH_HAS_DMA_COHERENT_TO_PFN if MMU
     select ARCH_HAS_DMA_PREP_COHERENT
     select ARCH_HAS_GCOV_PROFILE_ALL
     select ARCH_HAS_SYNC_DMA_FOR_CPU
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index e5c2d47608fe..c1c3da4fc667 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1134,9 +1134,9 @@ config DMA_NONCOHERENT
     select ARCH_HAS_DMA_WRITE_COMBINE
     select ARCH_HAS_SYNC_DMA_FOR_DEVICE
     select ARCH_HAS_UNCACHED_SEGMENT
-    select NEED_DMA_MAP_STATE
-    select ARCH_HAS_DMA_COHERENT_TO_PFN
+    select DMA_NONCOHERENT_MMAP
     select DMA_NONCOHERENT_CACHE_SYNC
+    select NEED_DMA_MAP_STATE
 
 config SYS_HAS_EARLY_PRINTK
     bool
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index 1d4d57dd9acf..fcf6d3eaac66 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -59,12 +59,6 @@ void *cached_kernel_address(void *addr)
     return __va(addr) - UNCAC_BASE;
 }
 
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-        dma_addr_t dma_addr)
-{
-    return page_to_pfn(virt_to_page(cached_kernel_address(cpu_addr)));
-}
-
 static inline void dma_sync_virt(void *addr, size_t size,
         enum dma_data_direction dir)
 {
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index f0330ce498d1..97af19141aed 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -459,7 +459,6 @@ config NOT_COHERENT_CACHE
     bool
     depends on 4xx || PPC_8xx || E200 || PPC_MPC512x || \
         GAMECUBE_COMMON || AMIGAONE
-    select ARCH_HAS_DMA_COHERENT_TO_PFN
     select ARCH_HAS_DMA_PREP_COHERENT
     select ARCH_HAS_SYNC_DMA_FOR_DEVICE
     select ARCH_HAS_SYNC_DMA_FOR_CPU
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 3238177e65ad..6db863c3eb93 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -76,5 +76,12 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
         dma_addr_t dma_addr, unsigned long attrs);
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
         gfp_t gfp, unsigned long attrs);
+int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
+        void *cpu_addr, dma_addr_t dma_addr, size_t size,
+        unsigned long attrs);
+bool dma_direct_can_mmap(struct device *dev);
+int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
+        void *cpu_addr, dma_addr_t dma_addr, size_t size,
+        unsigned long attrs);
 int dma_direct_supported(struct device *dev, u64 mask);
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index dd3de6d88fc0..e30fca1f1b12 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -41,8 +41,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
         gfp_t gfp, unsigned long attrs);
 void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
         dma_addr_t dma_addr, unsigned long attrs);
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-        dma_addr_t dma_addr);
 
 #ifdef CONFIG_MMU
 /*
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 73c5c2b8e824..4c103a24e380 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -51,9 +51,6 @@ config ARCH_HAS_SYNC_DMA_FOR_CPU_ALL
 config ARCH_HAS_DMA_PREP_COHERENT
     bool
 
-config ARCH_HAS_DMA_COHERENT_TO_PFN
-    bool
-
 config ARCH_HAS_FORCE_DMA_UNENCRYPTED
     bool
 
@@ -68,9 +65,18 @@ config SWIOTLB
     bool
     select NEED_DMA_MAP_STATE
 
+#
+# Should be selected if we can mmap non-coherent mappings to userspace.
+# The only thing that is really required is a way to set an uncached bit
+# in the pagetables
+#
+config DMA_NONCOHERENT_MMAP
+    bool
+
 config DMA_REMAP
     depends on MMU
     select GENERIC_ALLOCATOR
+    select DMA_NONCOHERENT_MMAP
     bool
 
 config DMA_DIRECT_REMAP
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 9621993bf2bb..76c722bc9e0c 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -43,6 +43,12 @@ static inline dma_addr_t phys_to_dma_direct(struct device *dev,
     return phys_to_dma(dev, phys);
 }
 
+static inline struct page *dma_direct_to_page(struct device *dev,
+        dma_addr_t dma_addr)
+{
+    return pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)));
+}
+
 u64 dma_direct_get_required_mask(struct device *dev)
 {
     phys_addr_t phys = (phys_addr_t)(max_pfn - 1) << PAGE_SHIFT;
@@ -380,6 +386,59 @@ dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
 }
 EXPORT_SYMBOL(dma_direct_map_resource);
 
+int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
+        void *cpu_addr, dma_addr_t dma_addr, size_t size,
+        unsigned long attrs)
+{
+    struct page *page = dma_direct_to_page(dev, dma_addr);
+    int ret;
+
+    ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+    if (!ret)
+        sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+    return ret;
+}
+
+#ifdef CONFIG_MMU
+bool dma_direct_can_mmap(struct device *dev)
+{
+    return dev_is_dma_coherent(dev) ||
+        IS_ENABLED(CONFIG_DMA_NONCOHERENT_MMAP);
+}
+
+int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
+        void *cpu_addr, dma_addr_t dma_addr, size_t size,
+        unsigned long attrs)
+{
+    unsigned long user_count = vma_pages(vma);
+    unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+    unsigned long pfn = PHYS_PFN(dma_to_phys(dev, dma_addr));
+    int ret = -ENXIO;
+
+    vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+
+    if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+        return ret;
+
+    if (vma->vm_pgoff >= count || user_count > count - vma->vm_pgoff)
+        return -ENXIO;
+    return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
+            user_count << PAGE_SHIFT, vma->vm_page_prot);
+}
+#else /* CONFIG_MMU */
+bool dma_direct_can_mmap(struct device *dev)
+{
+    return false;
+}
+
+int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
+        void *cpu_addr, dma_addr_t dma_addr, size_t size,
+        unsigned long attrs)
+{
+    return -ENXIO;
+}
+#endif /* CONFIG_MMU */
+
 /*
  * Because 32-bit DMA masks are so common we expect every architecture to be
  * able to satisfy them - either by not supporting more physical memory, or by
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 8682a5305cb3..98e3d873792e 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -112,24 +112,9 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
         void *cpu_addr, dma_addr_t dma_addr, size_t size,
         unsigned long attrs)
 {
-    struct page *page;
+    struct page *page = virt_to_page(cpu_addr);
     int ret;
 
-    if (!dev_is_dma_coherent(dev)) {
-        unsigned long pfn;
-
-        if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
-            return -ENXIO;
-
-        /* If the PFN is not valid, we do not have a struct page */
-        pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
-        if (!pfn_valid(pfn))
-            return -ENXIO;
-        page = pfn_to_page(pfn);
-    } else {
-        page = virt_to_page(cpu_addr);
-    }
-
     ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
     if (!ret)
         sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
@@ -154,7 +139,7 @@ int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
     const struct dma_map_ops *ops = get_dma_ops(dev);
 
     if (dma_is_direct(ops))
-        return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr,
+        return dma_direct_get_sgtable(dev, sgt, cpu_addr, dma_addr,
                 size, attrs);
     if (!ops->get_sgtable)
         return -ENXIO;
@@ -194,7 +179,6 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
     unsigned long user_count = vma_pages(vma);
     unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
     unsigned long off = vma->vm_pgoff;
-    unsigned long pfn;
     int ret = -ENXIO;
 
     vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
@@ -205,19 +189,8 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
     if (off >= count || user_count > count - off)
         return -ENXIO;
 
-    if (!dev_is_dma_coherent(dev)) {
-        if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
-            return -ENXIO;
-
-        /* If the PFN is not valid, we do not have a struct page */
-        pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
-        if (!pfn_valid(pfn))
-            return -ENXIO;
-    } else {
-        pfn = page_to_pfn(virt_to_page(cpu_addr));
-    }
-
-    return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
+    return remap_pfn_range(vma, vma->vm_start,
+            page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
             user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
     return -ENXIO;
@@ -235,12 +208,8 @@ bool dma_can_mmap(struct device *dev)
 {
     const struct dma_map_ops *ops = get_dma_ops(dev);
 
-    if (dma_is_direct(ops)) {
-        return IS_ENABLED(CONFIG_MMU) &&
-               (dev_is_dma_coherent(dev) ||
-            IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN));
-    }
-
+    if (dma_is_direct(ops))
+        return dma_direct_can_mmap(dev);
     return ops->mmap != NULL;
 }
 EXPORT_SYMBOL_GPL(dma_can_mmap);
@@ -265,7 +234,7 @@ int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
     const struct dma_map_ops *ops = get_dma_ops(dev);
 
     if (dma_is_direct(ops))
-        return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size,
+        return dma_direct_mmap(dev, vma, cpu_addr, dma_addr, size,
                 attrs);
     if (!ops->mmap)
         return -ENXIO;
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index 90d5ce77c189..3c49499ee6b0 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -259,10 +259,4 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
         dma_free_contiguous(dev, page, size);
     }
 }
-
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-        dma_addr_t dma_addr)
-{
-    return __phys_to_pfn(dma_to_phys(dev, dma_addr));
-}
 #endif /* CONFIG_DMA_DIRECT_REMAP */
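To illustrate the consumer side: a driver that exports a coherent buffer to userspace goes through dma_can_mmap() and dma_mmap_attrs(), which with this patch dispatch to dma_direct_can_mmap() and dma_direct_mmap() for dma-direct devices. A minimal sketch (the my_dev structure and its fields are hypothetical, not from the patch):

    static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
    {
        struct my_dev *md = file->private_data;  /* hypothetical driver state */

        if (!dma_can_mmap(md->dev))              /* -> dma_direct_can_mmap() */
            return -ENXIO;
        /*
         * -> dma_direct_mmap(), which decodes md->dma_addr to a PFN and
         * remap_pfn_range()s it into the vma, no arch hook required.
         */
        return dma_mmap_attrs(md->dev, vma, md->cpu_addr, md->dma_addr,
                      md->size, 0);
    }
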
From patchwork Fri Sep 25 16:18:50 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309193
X-Mailing-List: stable@vger.kernel.org
Date: Fri, 25 Sep 2020 09:18:50 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-5-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 04/30 for 5.4] dma-mapping: merge the generic remapping
 helpers into dma-direct
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Max Filippov

From: Christoph Hellwig

Commit 3acac065508f6cc60ac9d3e4b7c6cc37fd91d531 upstream.

Integrate the generic dma remapping implementation into the main flow.
This prepares for architectures like xtensa that use an uncached
segment for pages in the kernel mapping, but can also remap highmem
from CMA.  To simplify that implementation we now always deduce the
page from the physical address via the DMA address instead of the
virtual address.

Signed-off-by: Christoph Hellwig
Reviewed-by: Max Filippov
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 60 ++++++++++++++++++++++++++++++++++++---------
 kernel/dma/remap.c  | 49 ------------------------------------
 2 files changed, 48 insertions(+), 61 deletions(-)

(Note: the include file names inside angle brackets were stripped by the
archive extraction; they are restored below from the cited upstream commit.)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 76c722bc9e0c..d30c5468a91a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -12,6 +12,7 @@
 #include <linux/dma-contiguous.h>
 #include <linux/dma-noncoherent.h>
 #include <linux/pfn.h>
+#include <linux/vmalloc.h>
 #include <linux/set_memory.h>
 #include <linux/swiotlb.h>
@@ -138,6 +139,15 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
     struct page *page;
     void *ret;
 
+    if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+        dma_alloc_need_uncached(dev, attrs) &&
+        !gfpflags_allow_blocking(gfp)) {
+        ret = dma_alloc_from_pool(PAGE_ALIGN(size), &page, gfp);
+        if (!ret)
+            return NULL;
+        goto done;
+    }
+
     page = __dma_direct_alloc_pages(dev, size, gfp, attrs);
     if (!page)
         return NULL;
@@ -147,9 +157,28 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
         /* remove any dirty cache lines on the kernel alias */
         if (!PageHighMem(page))
             arch_dma_prep_coherent(page, size);
-        *dma_handle = phys_to_dma(dev, page_to_phys(page));
         /* return the page pointer as the opaque cookie */
-        return page;
+        ret = page;
+        goto done;
+    }
+
+    if ((IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+         dma_alloc_need_uncached(dev, attrs)) ||
+        (IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
+        /* remove any dirty cache lines on the kernel alias */
+        arch_dma_prep_coherent(page, PAGE_ALIGN(size));
+
+        /* create a coherent mapping */
+        ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
+                dma_pgprot(dev, PAGE_KERNEL, attrs),
+                __builtin_return_address(0));
+        if (!ret) {
+            dma_free_contiguous(dev, page, size);
+            return ret;
+        }
+
+        memset(ret, 0, size);
+        goto done;
     }
 
     if (PageHighMem(page)) {
@@ -165,12 +194,9 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
     }
 
     ret = page_address(page);
-    if (force_dma_unencrypted(dev)) {
+    if (force_dma_unencrypted(dev))
         set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
-        *dma_handle = __phys_to_dma(dev, page_to_phys(page));
-    } else {
-        *dma_handle = phys_to_dma(dev, page_to_phys(page));
-    }
+
     memset(ret, 0, size);
 
     if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
@@ -178,7 +204,11 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
         arch_dma_prep_coherent(page, size);
         ret = uncached_kernel_address(ret);
     }
-
+done:
+    if (force_dma_unencrypted(dev))
+        *dma_handle = __phys_to_dma(dev, page_to_phys(page));
+    else
+        *dma_handle = phys_to_dma(dev, page_to_phys(page));
     return ret;
 }
 
@@ -194,19 +224,24 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
         return;
     }
 
+    if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+        dma_free_from_pool(cpu_addr, PAGE_ALIGN(size)))
+        return;
+
     if (force_dma_unencrypted(dev))
         set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
 
-    if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
-        dma_alloc_need_uncached(dev, attrs))
-        cpu_addr = cached_kernel_address(cpu_addr);
-    dma_free_contiguous(dev, virt_to_page(cpu_addr), size);
+    if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
+        vunmap(cpu_addr);
+
+    dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 void *dma_direct_alloc(struct device *dev, size_t size,
         dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
     if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+        !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
         dma_alloc_need_uncached(dev, attrs))
         return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
     return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
@@ -216,6 +251,7 @@ void dma_direct_free(struct device *dev, size_t size,
         void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
     if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+        !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
         dma_alloc_need_uncached(dev, attrs))
         arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
     else
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index 3c49499ee6b0..d47bd40fc0f5 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -210,53 +210,4 @@ bool dma_free_from_pool(void *start, size_t size)
     gen_pool_free(atomic_pool, (unsigned long)start, size);
     return true;
 }
-
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
-        gfp_t flags, unsigned long attrs)
-{
-    struct page *page = NULL;
-    void *ret;
-
-    size = PAGE_ALIGN(size);
-
-    if (!gfpflags_allow_blocking(flags)) {
-        ret = dma_alloc_from_pool(size, &page, flags);
-        if (!ret)
-            return NULL;
-        goto done;
-    }
-
-    page = __dma_direct_alloc_pages(dev, size, flags, attrs);
-    if (!page)
-        return NULL;
-
-    /* remove any dirty cache lines on the kernel alias */
-    arch_dma_prep_coherent(page, size);
-
-    /* create a coherent mapping */
-    ret = dma_common_contiguous_remap(page, size,
-            dma_pgprot(dev, PAGE_KERNEL, attrs),
-            __builtin_return_address(0));
-    if (!ret) {
-        dma_free_contiguous(dev, page, size);
-        return ret;
-    }
-
-    memset(ret, 0, size);
-done:
-    *dma_handle = phys_to_dma(dev, page_to_phys(page));
-    return ret;
-}
-
-void arch_dma_free(struct device *dev, size_t size, void *vaddr,
-        dma_addr_t dma_handle, unsigned long attrs)
-{
-    if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
-        phys_addr_t phys = dma_to_phys(dev, dma_handle);
-        struct page *page = pfn_to_page(__phys_to_pfn(phys));
-
-        vunmap(vaddr);
-        dma_free_contiguous(dev, page, size);
-    }
-}
 #endif /* CONFIG_DMA_DIRECT_REMAP */
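The key simplification above is that the free path no longer needs the kernel virtual address to find the pages: the DMA handle alone identifies them, which also covers vmalloc()-remapped non-coherent buffers. A sketch of the equivalent decode, mirroring dma_direct_to_page() from patch 03 (illustration only, not part of the patch):

    /* dma_addr -> phys -> pfn -> struct page; valid for dma-direct only */
    struct page *page = pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)));

    if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
        vunmap(cpu_addr);               /* drop the remapped kernel alias */
    dma_free_contiguous(dev, page, size);   /* free the backing pages */
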
From patchwork Fri Sep 25 16:18:51 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309192
X-Mailing-List: stable@vger.kernel.org
Date: Fri, 25 Sep 2020 09:18:51 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-6-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 05/30 for 5.4] lib/genalloc.c: rename addr_in_gen_pool to
 gen_pool_has_addr
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Huang Shijie, Andrew Morton, Russell King, Arnd Bergmann,
 Greg Kroah-Hartman, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
 Stephen Rothwell, Linus Torvalds

From: Huang Shijie

Commit 964975ac6677c97ae61ec9d6969dd5d03f18d1c3 upstream.

Following the kernel naming conventions, rename addr_in_gen_pool to
gen_pool_has_addr.

[sjhuang@iluvatar.ai: fix Documentation/ too]
Link: http://lkml.kernel.org/r/20181229015914.5573-1-sjhuang@iluvatar.ai
Link: http://lkml.kernel.org/r/20181228083950.20398-1-sjhuang@iluvatar.ai
Signed-off-by: Huang Shijie
Reviewed-by: Andrew Morton
Cc: Russell King
Cc: Arnd Bergmann
Cc: Greg Kroah-Hartman
Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Stephen Rothwell
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Peter Gonda
---
 Documentation/core-api/genalloc.rst | 2 +-
 arch/arm/mm/dma-mapping.c           | 2 +-
 drivers/misc/sram-exec.c            | 2 +-
 include/linux/genalloc.h            | 2 +-
 kernel/dma/remap.c                  | 2 +-
 lib/genalloc.c                      | 5 +++--
 6 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/Documentation/core-api/genalloc.rst b/Documentation/core-api/genalloc.rst
index 6b38a39fab24..a534cc7ebd05 100644
--- a/Documentation/core-api/genalloc.rst
+++ b/Documentation/core-api/genalloc.rst
@@ -129,7 +129,7 @@ writing of special-purpose memory allocators in the future.
    :functions: gen_pool_for_each_chunk
 
 .. kernel-doc:: lib/genalloc.c
-   :functions: addr_in_gen_pool
+   :functions: gen_pool_has_addr
 
 .. kernel-doc:: lib/genalloc.c
    :functions: gen_pool_avail
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 58d5765fb129..84ecbaefb9cf 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -529,7 +529,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page)
 
 static bool __in_atomic_pool(void *start, size_t size)
 {
-    return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
+    return gen_pool_has_addr(atomic_pool, (unsigned long)start, size);
 }
 
 static int __free_from_pool(void *start, size_t size)
diff --git a/drivers/misc/sram-exec.c b/drivers/misc/sram-exec.c
index 426ad912b441..d054e2842a5f 100644
--- a/drivers/misc/sram-exec.c
+++ b/drivers/misc/sram-exec.c
@@ -96,7 +96,7 @@ void *sram_exec_copy(struct gen_pool *pool, void *dst, void *src,
     if (!part)
         return NULL;
 
-    if (!addr_in_gen_pool(pool, (unsigned long)dst, size))
+    if (!gen_pool_has_addr(pool, (unsigned long)dst, size))
         return NULL;
 
     base = (unsigned long)part->base;
diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 4bd583bd6934..5b14a0f38124 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -206,7 +206,7 @@ extern struct gen_pool *devm_gen_pool_create(struct device *dev,
         int min_alloc_order, int nid, const char *name);
 extern struct gen_pool *gen_pool_get(struct device *dev, const char *name);
 
-bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start,
+extern bool gen_pool_has_addr(struct gen_pool *pool, unsigned long start,
         size_t size);
 
 #ifdef CONFIG_OF
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index d47bd40fc0f5..d14cbc83986a 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -178,7 +178,7 @@ bool dma_in_atomic_pool(void *start, size_t size)
     if (unlikely(!atomic_pool))
         return false;
 
-    return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
+    return gen_pool_has_addr(atomic_pool, (unsigned long)start, size);
 }
 
 void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
diff --git a/lib/genalloc.c b/lib/genalloc.c
index 9fc31292cfa1..e43d6107fd62 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -540,7 +540,7 @@ void gen_pool_for_each_chunk(struct gen_pool *pool,
 EXPORT_SYMBOL(gen_pool_for_each_chunk);
 
 /**
- * addr_in_gen_pool - checks if an address falls within the range of a pool
+ * gen_pool_has_addr - checks if an address falls within the range of a pool
  * @pool:	the generic memory pool
  * @start:	start address
  * @size:	size of the region
@@ -548,7 +548,7 @@ EXPORT_SYMBOL(gen_pool_for_each_chunk);
  * Check if the range of addresses falls within the specified pool. Returns
  * true if the entire range is contained in the pool and false otherwise.
 */
-bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start,
+bool gen_pool_has_addr(struct gen_pool *pool, unsigned long start,
             size_t size)
 {
     bool found = false;
@@ -567,6 +567,7 @@ bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start,
     rcu_read_unlock();
     return found;
 }
+EXPORT_SYMBOL(gen_pool_has_addr);
 
 /**
  * gen_pool_avail - get available free space of the pool
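A short usage sketch of the renamed helper, e.g. guarding a free against addresses that were never handed out by the pool (hypothetical caller, not from the patch):

    unsigned long addr = gen_pool_alloc(pool, len);
    ...
    /* formerly addr_in_gen_pool(); true only if [addr, addr+len) is in pool */
    if (gen_pool_has_addr(pool, addr, len))
        gen_pool_free(pool, addr, len);
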
p6ZDTIMT4KTrPtd7ctyJ5pycnkF3a1cQq+wm1Y5uay619FaQVZfU7p/ACdQYQ9z6QHDF KirA== X-Gm-Message-State: AOAM533ujYh/IxoG1bCLspWmm0GusGeY1aBetjUdHT5CJbG3lNZnBBBf fa3+tuaNPNi+xsEgPJPr/ZbPvNo39KAC06/kh4jRLmBN8FW/Uk5nCpjmNH83c6ceWWxXr3YBRDC VuwwrBMqzTPFBJQsaD2b5r2vJK8hFLu9THcT60RLVpy7Ng1kvGjQFov/nrPazOw== X-Google-Smtp-Source: ABdhPJxn5VlzlvhbMF+jl4jGlB+BF2wfTyb2xaig4cb7yDNmNpZPLFOgehQBmv1ALbwF0WL2m9DGAmTn5LQ= Sender: "pgonda via sendgmr" X-Received: from pgonda1.kir.corp.google.com ([2620:0:1008:1101:f693:9fff:fef4:e3a2]) (user=pgonda job=sendgmr) by 2002:a05:6a00:15c1:b029:13e:d13d:a07a with SMTP id o1-20020a056a0015c1b029013ed13da07amr37039pfu.17.1601050801182; Fri, 25 Sep 2020 09:20:01 -0700 (PDT) Date: Fri, 25 Sep 2020 09:18:52 -0700 In-Reply-To: <20200925161916.204667-1-pgonda@google.com> Message-Id: <20200925161916.204667-7-pgonda@google.com> Mime-Version: 1.0 References: <20200925161916.204667-1-pgonda@google.com> X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog Subject: [PATCH 06/30 for 5.4] dma-remap: separate DMA atomic pools from direct remap code From: Peter Gonda To: stable@vger.kernel.org Cc: Peter Gonda , Christoph Hellwig , David Rientjes Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: David Rientjes upstream e860c299ac0d738b44ff91693f11e63080a29698 commit. DMA atomic pools will be needed beyond only CONFIG_DMA_DIRECT_REMAP so separate them out into their own file. This also adds a new Kconfig option that can be subsequently used for options, such as CONFIG_AMD_MEM_ENCRYPT, that will utilize the coherent pools but do not have a dependency on direct remapping. For this patch alone, there is no functional change introduced. Reviewed-by: Christoph Hellwig Signed-off-by: David Rientjes [hch: fixup copyrights and remove unused includes] Signed-off-by: Christoph Hellwig Signed-off-by: Peter Gonda --- kernel/dma/Kconfig | 6 ++- kernel/dma/Makefile | 1 + kernel/dma/pool.c | 123 ++++++++++++++++++++++++++++++++++++++++++++ kernel/dma/remap.c | 121 +------------------------------------------ 4 files changed, 130 insertions(+), 121 deletions(-) create mode 100644 kernel/dma/pool.c diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig index 4c103a24e380..d006668c0027 100644 --- a/kernel/dma/Kconfig +++ b/kernel/dma/Kconfig @@ -79,10 +79,14 @@ config DMA_REMAP select DMA_NONCOHERENT_MMAP bool -config DMA_DIRECT_REMAP +config DMA_COHERENT_POOL bool select DMA_REMAP +config DMA_DIRECT_REMAP + bool + select DMA_COHERENT_POOL + config DMA_CMA bool "DMA Contiguous Memory Allocator" depends on HAVE_DMA_CONTIGUOUS && CMA diff --git a/kernel/dma/Makefile b/kernel/dma/Makefile index d237cf3dc181..370f63344e9c 100644 --- a/kernel/dma/Makefile +++ b/kernel/dma/Makefile @@ -6,4 +6,5 @@ obj-$(CONFIG_DMA_DECLARE_COHERENT) += coherent.o obj-$(CONFIG_DMA_VIRT_OPS) += virt.o obj-$(CONFIG_DMA_API_DEBUG) += debug.o obj-$(CONFIG_SWIOTLB) += swiotlb.o +obj-$(CONFIG_DMA_COHERENT_POOL) += pool.o obj-$(CONFIG_DMA_REMAP) += remap.o diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c new file mode 100644 index 000000000000..3df5d9d39922 --- /dev/null +++ b/kernel/dma/pool.c @@ -0,0 +1,123 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2012 ARM Ltd. 
+ * Copyright (C) 2020 Google LLC + */ +#include +#include +#include +#include +#include +#include + +static struct gen_pool *atomic_pool __ro_after_init; + +#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K +static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE; + +static int __init early_coherent_pool(char *p) +{ + atomic_pool_size = memparse(p, &p); + return 0; +} +early_param("coherent_pool", early_coherent_pool); + +static gfp_t dma_atomic_pool_gfp(void) +{ + if (IS_ENABLED(CONFIG_ZONE_DMA)) + return GFP_DMA; + if (IS_ENABLED(CONFIG_ZONE_DMA32)) + return GFP_DMA32; + return GFP_KERNEL; +} + +static int __init dma_atomic_pool_init(void) +{ + unsigned int pool_size_order = get_order(atomic_pool_size); + unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT; + struct page *page; + void *addr; + int ret; + + if (dev_get_cma_area(NULL)) + page = dma_alloc_from_contiguous(NULL, nr_pages, + pool_size_order, false); + else + page = alloc_pages(dma_atomic_pool_gfp(), pool_size_order); + if (!page) + goto out; + + arch_dma_prep_coherent(page, atomic_pool_size); + + atomic_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!atomic_pool) + goto free_page; + + addr = dma_common_contiguous_remap(page, atomic_pool_size, + pgprot_dmacoherent(PAGE_KERNEL), + __builtin_return_address(0)); + if (!addr) + goto destroy_genpool; + + ret = gen_pool_add_virt(atomic_pool, (unsigned long)addr, + page_to_phys(page), atomic_pool_size, -1); + if (ret) + goto remove_mapping; + gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align, NULL); + + pr_info("DMA: preallocated %zu KiB pool for atomic allocations\n", + atomic_pool_size / 1024); + return 0; + +remove_mapping: + dma_common_free_remap(addr, atomic_pool_size); +destroy_genpool: + gen_pool_destroy(atomic_pool); + atomic_pool = NULL; +free_page: + if (!dma_release_from_contiguous(NULL, page, nr_pages)) + __free_pages(page, pool_size_order); +out: + pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n", + atomic_pool_size / 1024); + return -ENOMEM; +} +postcore_initcall(dma_atomic_pool_init); + +bool dma_in_atomic_pool(void *start, size_t size) +{ + if (unlikely(!atomic_pool)) + return false; + + return gen_pool_has_addr(atomic_pool, (unsigned long)start, size); +} + +void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) +{ + unsigned long val; + void *ptr = NULL; + + if (!atomic_pool) { + WARN(1, "coherent pool not initialised!\n"); + return NULL; + } + + val = gen_pool_alloc(atomic_pool, size); + if (val) { + phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val); + + *ret_page = pfn_to_page(__phys_to_pfn(phys)); + ptr = (void *)val; + memset(ptr, 0, size); + } + + return ptr; +} + +bool dma_free_from_pool(void *start, size_t size) +{ + if (!dma_in_atomic_pool(start, size)) + return false; + gen_pool_free(atomic_pool, (unsigned long)start, size); + return true; +} diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c index d14cbc83986a..f7b402849891 100644 --- a/kernel/dma/remap.c +++ b/kernel/dma/remap.c @@ -1,13 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Copyright (C) 2012 ARM Ltd. 
* Copyright (c) 2014 The Linux Foundation */ -#include -#include -#include -#include -#include +#include #include #include @@ -97,117 +92,3 @@ void dma_common_free_remap(void *cpu_addr, size_t size) unmap_kernel_range((unsigned long)cpu_addr, PAGE_ALIGN(size)); vunmap(cpu_addr); } - -#ifdef CONFIG_DMA_DIRECT_REMAP -static struct gen_pool *atomic_pool __ro_after_init; - -#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K -static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE; - -static int __init early_coherent_pool(char *p) -{ - atomic_pool_size = memparse(p, &p); - return 0; -} -early_param("coherent_pool", early_coherent_pool); - -static gfp_t dma_atomic_pool_gfp(void) -{ - if (IS_ENABLED(CONFIG_ZONE_DMA)) - return GFP_DMA; - if (IS_ENABLED(CONFIG_ZONE_DMA32)) - return GFP_DMA32; - return GFP_KERNEL; -} - -static int __init dma_atomic_pool_init(void) -{ - unsigned int pool_size_order = get_order(atomic_pool_size); - unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT; - struct page *page; - void *addr; - int ret; - - if (dev_get_cma_area(NULL)) - page = dma_alloc_from_contiguous(NULL, nr_pages, - pool_size_order, false); - else - page = alloc_pages(dma_atomic_pool_gfp(), pool_size_order); - if (!page) - goto out; - - arch_dma_prep_coherent(page, atomic_pool_size); - - atomic_pool = gen_pool_create(PAGE_SHIFT, -1); - if (!atomic_pool) - goto free_page; - - addr = dma_common_contiguous_remap(page, atomic_pool_size, - pgprot_dmacoherent(PAGE_KERNEL), - __builtin_return_address(0)); - if (!addr) - goto destroy_genpool; - - ret = gen_pool_add_virt(atomic_pool, (unsigned long)addr, - page_to_phys(page), atomic_pool_size, -1); - if (ret) - goto remove_mapping; - gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align, NULL); - - pr_info("DMA: preallocated %zu KiB pool for atomic allocations\n", - atomic_pool_size / 1024); - return 0; - -remove_mapping: - dma_common_free_remap(addr, atomic_pool_size); -destroy_genpool: - gen_pool_destroy(atomic_pool); - atomic_pool = NULL; -free_page: - if (!dma_release_from_contiguous(NULL, page, nr_pages)) - __free_pages(page, pool_size_order); -out: - pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n", - atomic_pool_size / 1024); - return -ENOMEM; -} -postcore_initcall(dma_atomic_pool_init); - -bool dma_in_atomic_pool(void *start, size_t size) -{ - if (unlikely(!atomic_pool)) - return false; - - return gen_pool_has_addr(atomic_pool, (unsigned long)start, size); -} - -void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) -{ - unsigned long val; - void *ptr = NULL; - - if (!atomic_pool) { - WARN(1, "coherent pool not initialised!\n"); - return NULL; - } - - val = gen_pool_alloc(atomic_pool, size); - if (val) { - phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val); - - *ret_page = pfn_to_page(__phys_to_pfn(phys)); - ptr = (void *)val; - memset(ptr, 0, size); - } - - return ptr; -} - -bool dma_free_from_pool(void *start, size_t size) -{ - if (!dma_in_atomic_pool(start, size)) - return false; - gen_pool_free(atomic_pool, (unsigned long)start, size); - return true; -} -#endif /* CONFIG_DMA_DIRECT_REMAP */
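To make the shape of the pool concrete: the code moved into kernel/dma/pool.c above reserves one contiguous block at boot and then carves small, zeroed, non-blocking allocations out of it. The following is a minimal userspace C sketch of that idea, not kernel code: posix_memalign() stands in for the page allocator, a bump allocator stands in for genalloc, and all names (model_pool_init, model_pool_alloc, MODEL_POOL_SIZE) are hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MODEL_POOL_SIZE (256 * 1024)   /* mirrors the SZ_256K default */

static unsigned char *pool_base;       /* backing block, reserved once */
static size_t pool_used;               /* bump pointer; no free list */

static int model_pool_init(void)
{
    /* One up-front reservation, like dma_atomic_pool_init(). */
    if (posix_memalign((void **)&pool_base, 4096, MODEL_POOL_SIZE))
        return -1;
    pool_used = 0;
    return 0;
}

static void *model_pool_alloc(size_t size)
{
    void *p;

    size = (size + 63) & ~(size_t)63;   /* keep allocations aligned */
    if (!pool_base || pool_used + size > MODEL_POOL_SIZE)
        return NULL;                    /* pool exhausted: caller must cope */
    p = pool_base + pool_used;
    pool_used += size;
    memset(p, 0, size);                 /* pool memory is handed out zeroed */
    return p;
}

int main(void)
{
    if (model_pool_init())
        return 1;
    printf("alloc 1 KiB -> %p\n", model_pool_alloc(1024));
    printf("used %zu of %u bytes\n", pool_used, MODEL_POOL_SIZE);
    free(pool_base);
    return 0;
}

The real pool differs in the ways the later patches address: it can free back into the pool, pick a pool per device, and grow at runtime.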
From patchwork Fri Sep 25 16:18:53 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263447
Date: Fri, 25 Sep 2020 09:18:53 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-8-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 07/30 for 5.4] dma-pool: add additional coherent pools to map to gfp mask
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David
Rientjes , Christoph Hellwig Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: David Rientjes upstream c84dc6e68a1d2464e050d9694be4e4ff49e32bfd commit. The single atomic pool is allocated from the lowest zone possible since it is guaranteed to be applicable for any DMA allocation. Devices may allocate through the DMA API but not have a strict reliance on GFP_DMA memory. Since the atomic pool will be used for all non-blockable allocations, returning all memory from ZONE_DMA may unnecessarily deplete the zone. Provision for multiple atomic pools that will map to the optimal gfp mask of the device. When allocating non-blockable memory, determine the optimal gfp mask of the device and use the appropriate atomic pool. The coherent DMA mask will remain the same between allocation and free and, thus, memory will be freed to the same atomic pool it was allocated from. __dma_atomic_pool_init() will be changed to return struct gen_pool * later once dynamic expansion is added. Signed-off-by: David Rientjes Signed-off-by: Christoph Hellwig Signed-off-by: Peter Gonda --- drivers/iommu/dma-iommu.c | 5 +- include/linux/dma-direct.h | 2 + include/linux/dma-mapping.h | 6 +- kernel/dma/direct.c | 10 +-- kernel/dma/pool.c | 120 +++++++++++++++++++++++------------- 5 files changed, 90 insertions(+), 53 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 76bd2309e023..b642c1123a29 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -927,7 +927,7 @@ static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr) /* Non-coherent atomic allocation? Easy */ if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - dma_free_from_pool(cpu_addr, alloc_size)) + dma_free_from_pool(dev, cpu_addr, alloc_size)) return; if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr)) { @@ -1010,7 +1010,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size, if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !gfpflags_allow_blocking(gfp) && !coherent) - cpu_addr = dma_alloc_from_pool(PAGE_ALIGN(size), &page, gfp); + cpu_addr = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, + gfp); else cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs); if (!cpu_addr) diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index 6db863c3eb93..fb5ec847ddf3 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -66,6 +66,8 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr) } u64 dma_direct_get_required_mask(struct device *dev); +gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask, + u64 *phys_mask); void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); void dma_direct_free(struct device *dev, size_t size, void *cpu_addr, diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 4d450672b7d6..e4be706d8f5e 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -633,9 +633,9 @@ void *dma_common_pages_remap(struct page **pages, size_t size, pgprot_t prot, const void *caller); void dma_common_free_remap(void *cpu_addr, size_t size); -bool dma_in_atomic_pool(void *start, size_t size); -void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags); -bool dma_free_from_pool(void *start, size_t size); +void *dma_alloc_from_pool(struct device *dev, size_t size, + struct page **ret_page, gfp_t flags); +bool dma_free_from_pool(struct device *dev, void *start, size_t size); 
int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index d30c5468a91a..38266fb2797d 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -58,8 +58,8 @@ u64 dma_direct_get_required_mask(struct device *dev) return (1ULL << (fls64(max_dma) - 1)) * 2 - 1; } -static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask, - u64 *phys_mask) +gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask, + u64 *phys_mask) { if (dev->bus_dma_mask && dev->bus_dma_mask < dma_mask) dma_mask = dev->bus_dma_mask; @@ -103,7 +103,7 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, /* we always manually zero the memory once we are done: */ gfp &= ~__GFP_ZERO; - gfp |= __dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, + gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, &phys_mask); page = dma_alloc_contiguous(dev, alloc_size, gfp); if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { @@ -142,7 +142,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size, if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && dma_alloc_need_uncached(dev, attrs) && !gfpflags_allow_blocking(gfp)) { - ret = dma_alloc_from_pool(PAGE_ALIGN(size), &page, gfp); + ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp); if (!ret) return NULL; goto done; @@ -225,7 +225,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, } if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - dma_free_from_pool(cpu_addr, PAGE_ALIGN(size))) + dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size))) return; if (force_dma_unencrypted(dev)) diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index 3df5d9d39922..db4f89ac5f5f 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -10,7 +10,9 @@ #include #include -static struct gen_pool *atomic_pool __ro_after_init; +static struct gen_pool *atomic_pool_dma __ro_after_init; +static struct gen_pool *atomic_pool_dma32 __ro_after_init; +static struct gen_pool *atomic_pool_kernel __ro_after_init; #define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE; @@ -22,89 +24,119 @@ static int __init early_coherent_pool(char *p) } early_param("coherent_pool", early_coherent_pool); -static gfp_t dma_atomic_pool_gfp(void) +static int __init __dma_atomic_pool_init(struct gen_pool **pool, + size_t pool_size, gfp_t gfp) { - if (IS_ENABLED(CONFIG_ZONE_DMA)) - return GFP_DMA; - if (IS_ENABLED(CONFIG_ZONE_DMA32)) - return GFP_DMA32; - return GFP_KERNEL; -} - -static int __init dma_atomic_pool_init(void) -{ - unsigned int pool_size_order = get_order(atomic_pool_size); - unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT; + const unsigned int order = get_order(pool_size); + const unsigned long nr_pages = pool_size >> PAGE_SHIFT; struct page *page; void *addr; int ret; if (dev_get_cma_area(NULL)) - page = dma_alloc_from_contiguous(NULL, nr_pages, - pool_size_order, false); + page = dma_alloc_from_contiguous(NULL, nr_pages, order, false); else - page = alloc_pages(dma_atomic_pool_gfp(), pool_size_order); + page = alloc_pages(gfp, order); if (!page) goto out; - arch_dma_prep_coherent(page, atomic_pool_size); + arch_dma_prep_coherent(page, pool_size); - atomic_pool = gen_pool_create(PAGE_SHIFT, -1); - if (!atomic_pool) + *pool = gen_pool_create(PAGE_SHIFT, -1); + if (!*pool) goto free_page; - addr = dma_common_contiguous_remap(page, atomic_pool_size, + addr = 
dma_common_contiguous_remap(page, pool_size, pgprot_dmacoherent(PAGE_KERNEL), __builtin_return_address(0)); if (!addr) goto destroy_genpool; - ret = gen_pool_add_virt(atomic_pool, (unsigned long)addr, - page_to_phys(page), atomic_pool_size, -1); + ret = gen_pool_add_virt(*pool, (unsigned long)addr, page_to_phys(page), + pool_size, -1); if (ret) goto remove_mapping; - gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align, NULL); + gen_pool_set_algo(*pool, gen_pool_first_fit_order_align, NULL); - pr_info("DMA: preallocated %zu KiB pool for atomic allocations\n", - atomic_pool_size / 1024); + pr_info("DMA: preallocated %zu KiB %pGg pool for atomic allocations\n", + pool_size >> 10, &gfp); return 0; remove_mapping: - dma_common_free_remap(addr, atomic_pool_size); + dma_common_free_remap(addr, pool_size); destroy_genpool: - gen_pool_destroy(atomic_pool); - atomic_pool = NULL; + gen_pool_destroy(*pool); + *pool = NULL; free_page: if (!dma_release_from_contiguous(NULL, page, nr_pages)) - __free_pages(page, pool_size_order); + __free_pages(page, order); out: - pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n", - atomic_pool_size / 1024); + pr_err("DMA: failed to allocate %zu KiB %pGg pool for atomic allocation\n", + pool_size >> 10, &gfp); return -ENOMEM; } + +static int __init dma_atomic_pool_init(void) +{ + int ret = 0; + int err; + + ret = __dma_atomic_pool_init(&atomic_pool_kernel, atomic_pool_size, + GFP_KERNEL); + if (IS_ENABLED(CONFIG_ZONE_DMA)) { + err = __dma_atomic_pool_init(&atomic_pool_dma, + atomic_pool_size, GFP_DMA); + if (!ret && err) + ret = err; + } + if (IS_ENABLED(CONFIG_ZONE_DMA32)) { + err = __dma_atomic_pool_init(&atomic_pool_dma32, + atomic_pool_size, GFP_DMA32); + if (!ret && err) + ret = err; + } + return ret; +} postcore_initcall(dma_atomic_pool_init); -bool dma_in_atomic_pool(void *start, size_t size) +static inline struct gen_pool *dev_to_pool(struct device *dev) { - if (unlikely(!atomic_pool)) - return false; + u64 phys_mask; + gfp_t gfp; + + gfp = dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, + &phys_mask); + if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp == GFP_DMA) + return atomic_pool_dma; + if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp == GFP_DMA32) + return atomic_pool_dma32; + return atomic_pool_kernel; +} - return gen_pool_has_addr(atomic_pool, (unsigned long)start, size); +static bool dma_in_atomic_pool(struct device *dev, void *start, size_t size) +{ + struct gen_pool *pool = dev_to_pool(dev); + + if (unlikely(!pool)) + return false; + return gen_pool_has_addr(pool, (unsigned long)start, size); } -void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) +void *dma_alloc_from_pool(struct device *dev, size_t size, + struct page **ret_page, gfp_t flags) { + struct gen_pool *pool = dev_to_pool(dev); unsigned long val; void *ptr = NULL; - if (!atomic_pool) { - WARN(1, "coherent pool not initialised!\n"); + if (!pool) { + WARN(1, "%pGg atomic pool not initialised!\n", &flags); return NULL; } - val = gen_pool_alloc(atomic_pool, size); + val = gen_pool_alloc(pool, size); if (val) { - phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val); + phys_addr_t phys = gen_pool_virt_to_phys(pool, val); *ret_page = pfn_to_page(__phys_to_pfn(phys)); ptr = (void *)val; @@ -114,10 +146,12 @@ void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) return ptr; } -bool dma_free_from_pool(void *start, size_t size) +bool dma_free_from_pool(struct device *dev, void *start, size_t size) { - if 
(!dma_in_atomic_pool(start, size)) + struct gen_pool *pool = dev_to_pool(dev); + + if (!dma_in_atomic_pool(dev, start, size)) return false; - gen_pool_free(atomic_pool, (unsigned long)start, size); + gen_pool_free(pool, (unsigned long)start, size); return true; }
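The key mechanism in this patch is dev_to_pool(): both the allocation and the free path derive the pool from the device's coherent DMA mask, so memory always returns to the pool it came from. Below is a small userspace C sketch of that mapping, not the kernel code; dev_to_pool_model() and the mask thresholds are illustrative stand-ins for the gfp-mask logic the patch actually uses.

#include <stdint.h>
#include <stdio.h>

enum model_pool { POOL_DMA, POOL_DMA32, POOL_KERNEL };

static const char *pool_name[] = { "dma", "dma32", "kernel" };

/* Stand-in for dev_to_pool(): derive the pool from the coherent mask. */
static enum model_pool dev_to_pool_model(uint64_t coherent_dma_mask)
{
    if (coherent_dma_mask <= 0xffffffULL)        /* 24-bit ISA-style mask */
        return POOL_DMA;
    if (coherent_dma_mask <= 0xffffffffULL)      /* 32-bit mask */
        return POOL_DMA32;
    return POOL_KERNEL;                          /* full 64-bit mask */
}

int main(void)
{
    uint64_t masks[] = { 0xffffffULL, 0xffffffffULL, ~0ULL };

    for (int i = 0; i < 3; i++) {
        enum model_pool p = dev_to_pool_model(masks[i]);
        /* Free must consult the same mapping so memory returns home. */
        printf("mask %#llx -> %s pool (alloc and free)\n",
               (unsigned long long)masks[i], pool_name[p]);
    }
    return 0;
}

Because the mask cannot change between dma_alloc_from_pool() and dma_free_from_pool(), this lookup is enough to keep the three pools consistent without tagging each allocation.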
From patchwork Fri Sep 25 16:18:54 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309190
Date: Fri, 25 Sep 2020 09:18:54 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-9-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 08/30 for 5.4] dma-pool: dynamically expanding atomic pools
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 54adadf9b08571fb8b11dc9d0d3a2ddd39825efd commit.

When an atomic pool becomes fully depleted because it is now relied upon for all non-blocking allocations through the DMA API, allow background expansion of each pool by a kworker. When an atomic pool has less than the default size of memory left, kick off a kworker to dynamically expand the pool in the background. The pool is doubled in size, up to MAX_ORDER-1. If memory cannot be allocated at the requested order, smaller allocation(s) are attempted. This allows the default size to be kept quite low when one or more of the atomic pools is not used. Allocations for lowmem should also use GFP_KERNEL for the benefits of reclaim, so use GFP_KERNEL | GFP_DMA and GFP_KERNEL | GFP_DMA32 for lowmem allocations. This also allows __dma_atomic_pool_init() to return a pointer to the pool to make initialization cleaner. Also switch over some node ids to the more appropriate NUMA_NO_NODE.

Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 122 +++++++++++++++++++++++++++++++---------------
 1 file changed, 84 insertions(+), 38 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index db4f89ac5f5f..ffe866c2c034 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -9,13 +9,17 @@ #include #include #include +#include static struct gen_pool *atomic_pool_dma __ro_after_init; static struct gen_pool *atomic_pool_dma32 __ro_after_init; static struct gen_pool *atomic_pool_kernel __ro_after_init; #define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K -static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE; +static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE; + +/* Dynamic background expansion when the atomic pool is near capacity */ +static struct work_struct atomic_pool_work; static int __init early_coherent_pool(char *p) { @@ -24,76 +28,116 @@ static int __init early_coherent_pool(char *p) } early_param("coherent_pool", early_coherent_pool); -static int __init __dma_atomic_pool_init(struct gen_pool **pool, - size_t pool_size, gfp_t gfp) +static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, + gfp_t gfp) { - const unsigned int order = get_order(pool_size); - const unsigned long nr_pages = pool_size >> PAGE_SHIFT; + unsigned int order; struct page *page; void *addr; - int ret; + int ret = -ENOMEM; + + /* Cannot allocate larger than MAX_ORDER-1 */ + order = min(get_order(pool_size), MAX_ORDER-1); + + do { + pool_size = 1 << (PAGE_SHIFT + order); - if (dev_get_cma_area(NULL)) - page = dma_alloc_from_contiguous(NULL, nr_pages, order, false); - else - page = alloc_pages(gfp, order); + if (dev_get_cma_area(NULL)) + page = dma_alloc_from_contiguous(NULL, 1 << order, + order, false); + else + page = alloc_pages(gfp,
order); + } while (!page && order-- > 0); if (!page) goto out; arch_dma_prep_coherent(page, pool_size); - *pool = gen_pool_create(PAGE_SHIFT, -1); - if (!*pool) - goto free_page; - addr = dma_common_contiguous_remap(page, pool_size, pgprot_dmacoherent(PAGE_KERNEL), __builtin_return_address(0)); if (!addr) - goto destroy_genpool; + goto free_page; - ret = gen_pool_add_virt(*pool, (unsigned long)addr, page_to_phys(page), - pool_size, -1); + ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page), + pool_size, NUMA_NO_NODE); if (ret) goto remove_mapping; - gen_pool_set_algo(*pool, gen_pool_first_fit_order_align, NULL); - pr_info("DMA: preallocated %zu KiB %pGg pool for atomic allocations\n", - pool_size >> 10, &gfp); return 0; remove_mapping: dma_common_free_remap(addr, pool_size); -destroy_genpool: - gen_pool_destroy(*pool); - *pool = NULL; free_page: - if (!dma_release_from_contiguous(NULL, page, nr_pages)) + if (!dma_release_from_contiguous(NULL, page, 1 << order)) __free_pages(page, order); out: - pr_err("DMA: failed to allocate %zu KiB %pGg pool for atomic allocation\n", - pool_size >> 10, &gfp); - return -ENOMEM; + return ret; +} + +static void atomic_pool_resize(struct gen_pool *pool, gfp_t gfp) +{ + if (pool && gen_pool_avail(pool) < atomic_pool_size) + atomic_pool_expand(pool, gen_pool_size(pool), gfp); +} + +static void atomic_pool_work_fn(struct work_struct *work) +{ + if (IS_ENABLED(CONFIG_ZONE_DMA)) + atomic_pool_resize(atomic_pool_dma, + GFP_KERNEL | GFP_DMA); + if (IS_ENABLED(CONFIG_ZONE_DMA32)) + atomic_pool_resize(atomic_pool_dma32, + GFP_KERNEL | GFP_DMA32); + atomic_pool_resize(atomic_pool_kernel, GFP_KERNEL); +} + +static __init struct gen_pool *__dma_atomic_pool_init(size_t pool_size, + gfp_t gfp) +{ + struct gen_pool *pool; + int ret; + + pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE); + if (!pool) + return NULL; + + gen_pool_set_algo(pool, gen_pool_first_fit_order_align, NULL); + + ret = atomic_pool_expand(pool, pool_size, gfp); + if (ret) { + gen_pool_destroy(pool); + pr_err("DMA: failed to allocate %zu KiB %pGg pool for atomic allocation\n", + pool_size >> 10, &gfp); + return NULL; + } + + pr_info("DMA: preallocated %zu KiB %pGg pool for atomic allocations\n", + gen_pool_size(pool) >> 10, &gfp); + return pool; } static int __init dma_atomic_pool_init(void) { int ret = 0; - int err; - ret = __dma_atomic_pool_init(&atomic_pool_kernel, atomic_pool_size, - GFP_KERNEL); + INIT_WORK(&atomic_pool_work, atomic_pool_work_fn); + + atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size, + GFP_KERNEL); + if (!atomic_pool_kernel) + ret = -ENOMEM; if (IS_ENABLED(CONFIG_ZONE_DMA)) { - err = __dma_atomic_pool_init(&atomic_pool_dma, - atomic_pool_size, GFP_DMA); - if (!ret && err) - ret = err; + atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size, + GFP_KERNEL | GFP_DMA); + if (!atomic_pool_dma) + ret = -ENOMEM; } if (IS_ENABLED(CONFIG_ZONE_DMA32)) { - err = __dma_atomic_pool_init(&atomic_pool_dma32, - atomic_pool_size, GFP_DMA32); - if (!ret && err) - ret = err; + atomic_pool_dma32 = __dma_atomic_pool_init(atomic_pool_size, + GFP_KERNEL | GFP_DMA32); + if (!atomic_pool_dma32) + ret = -ENOMEM; } return ret; } @@ -142,6 +186,8 @@ void *dma_alloc_from_pool(struct device *dev, size_t size, ptr = (void *)val; memset(ptr, 0, size); } + if (gen_pool_avail(pool) < atomic_pool_size) + schedule_work(&atomic_pool_work); return ptr; }
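Two moving parts in this patch are worth isolating: the "try the requested order, fall back to smaller orders" allocation loop in atomic_pool_expand(), and the low-watermark check in dma_alloc_from_pool() that kicks the kworker. The userspace C sketch below models both under stated assumptions (a fake allocator that only has order-6 blocks available; all names such as try_alloc_order and pool_maybe_refill are hypothetical), so it shows the policy rather than the kernel implementation.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT_MODEL 12
#define MAX_ORDER_MODEL  11            /* like MAX_ORDER on x86 */

/* Pretend the system can only satisfy order <= 6 contiguous requests. */
static void *alloc_pages_model(unsigned int order)
{
    return order <= 6 ? malloc((size_t)1 << (PAGE_SHIFT_MODEL + order))
                      : NULL;
}

/* Fall back order by order until an allocation succeeds, mirroring the
 * do/while loop in atomic_pool_expand(). */
static void *try_alloc_order(unsigned int order, unsigned int *got)
{
    void *page;

    do {
        page = alloc_pages_model(order);
    } while (!page && order-- > 0);
    *got = order;
    return page;
}

/* Mirror of the watermark test that schedules atomic_pool_work. */
static void pool_maybe_refill(size_t avail, size_t watermark)
{
    if (avail < watermark)
        printf("below watermark: schedule background expansion\n");
}

int main(void)
{
    unsigned int got;
    void *p = try_alloc_order(MAX_ORDER_MODEL - 1, &got);

    printf("asked for order %d, got order %u (%p)\n",
           MAX_ORDER_MODEL - 1, got, p);
    pool_maybe_refill(64 * 1024, 256 * 1024);
    free(p);
    return 0;
}

The design point the fallback captures: expansion is opportunistic, so a fragmented system still grows the pool by whatever contiguous block it can get instead of failing outright.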
From patchwork Fri Sep 25 16:18:55 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263450
Date: Fri, 25 Sep 2020 09:18:55 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-10-pgonda@google.com>
Mime-Version: 1.0 References: <20200925161916.204667-1-pgonda@google.com> X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog Subject: [PATCH 09/30 for 5.4] dma-direct: atomic allocations must come from atomic coherent pools From: Peter Gonda To: stable@vger.kernel.org Cc: Peter Gonda , David Rientjes , Christoph Hellwig Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: David Rientjes upstream 76a19940bd62a81148c303f3df6d0cee9ae4b509 commit. When a device requires unencrypted memory and the context does not allow blocking, memory must be returned from the atomic coherent pools. This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the config only requires CONFIG_DMA_COHERENT_POOL. This will be used for CONFIG_AMD_MEM_ENCRYPT in a subsequent patch. Keep all memory in these pools unencrypted. When set_memory_decrypted() fails, this prohibits the memory from being added. If adding memory to the genpool fails, and set_memory_encrypted() subsequently fails, there is no alternative other than leaking the memory. Signed-off-by: David Rientjes Signed-off-by: Christoph Hellwig Signed-off-by: Peter Gonda --- kernel/dma/direct.c | 46 ++++++++++++++++++++++++++++++++++++++------- kernel/dma/pool.c | 27 +++++++++++++++++++++++--- 2 files changed, 63 insertions(+), 10 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 38266fb2797d..210ea469028c 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -90,6 +90,39 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) min_not_zero(dev->coherent_dma_mask, dev->bus_dma_mask); } +/* + * Decrypting memory is allowed to block, so if this device requires + * unencrypted memory it must come from atomic pools. + */ +static inline bool dma_should_alloc_from_pool(struct device *dev, gfp_t gfp, + unsigned long attrs) +{ + if (!IS_ENABLED(CONFIG_DMA_COHERENT_POOL)) + return false; + if (gfpflags_allow_blocking(gfp)) + return false; + if (force_dma_unencrypted(dev)) + return true; + if (!IS_ENABLED(CONFIG_DMA_DIRECT_REMAP)) + return false; + if (dma_alloc_need_uncached(dev, attrs)) + return true; + return false; +} + +static inline bool dma_should_free_from_pool(struct device *dev, + unsigned long attrs) +{ + if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL)) + return true; + if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && + !force_dma_unencrypted(dev)) + return false; + if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP)) + return true; + return false; +} + struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, gfp_t gfp, unsigned long attrs) { @@ -139,9 +172,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size, struct page *page; void *ret; - if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - dma_alloc_need_uncached(dev, attrs) && - !gfpflags_allow_blocking(gfp)) { + if (dma_should_alloc_from_pool(dev, gfp, attrs)) { ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp); if (!ret) return NULL; @@ -217,6 +248,11 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, { unsigned int page_order = get_order(size); + /* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */ + if (dma_should_free_from_pool(dev, attrs) && + dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size))) + return; + if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && !force_dma_unencrypted(dev)) { /* cpu_addr is a struct page cookie, not a kernel address */ @@ -224,10 +260,6 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, return; } - if 
(IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size))) - return; - if (force_dma_unencrypted(dev)) set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order); diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index ffe866c2c034..c8d61b3a7bd6 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -53,22 +54,42 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, arch_dma_prep_coherent(page, pool_size); +#ifdef CONFIG_DMA_DIRECT_REMAP addr = dma_common_contiguous_remap(page, pool_size, pgprot_dmacoherent(PAGE_KERNEL), __builtin_return_address(0)); if (!addr) goto free_page; - +#else + addr = page_to_virt(page); +#endif + /* + * Memory in the atomic DMA pools must be unencrypted, the pools do not + * shrink so no re-encryption occurs in dma_direct_free_pages(). + */ + ret = set_memory_decrypted((unsigned long)page_to_virt(page), + 1 << order); + if (ret) + goto remove_mapping; ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page), pool_size, NUMA_NO_NODE); if (ret) - goto remove_mapping; + goto encrypt_mapping; return 0; +encrypt_mapping: + ret = set_memory_encrypted((unsigned long)page_to_virt(page), + 1 << order); + if (WARN_ON_ONCE(ret)) { + /* Decrypt succeeded but encrypt failed, purposely leak */ + goto out; + } remove_mapping: +#ifdef CONFIG_DMA_DIRECT_REMAP dma_common_free_remap(addr, pool_size); +#endif +free_page: __maybe_unused if (!dma_release_from_contiguous(NULL, page, 1 << order)) __free_pages(page, order); out:
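The heart of this patch is the dma_should_alloc_from_pool() predicate quoted in the direct.c hunk: atomic contexts must take pool memory whenever decrypting or remapping inline could block. The userspace C sketch below restates that decision table with plain booleans so it can be exercised directly; it is a model, not the kernel function, and every name (model_ctx, should_alloc_from_pool) is hypothetical.

#include <stdbool.h>
#include <stdio.h>

struct model_ctx {
    bool coherent_pool_enabled;   /* CONFIG_DMA_COHERENT_POOL */
    bool direct_remap_enabled;    /* CONFIG_DMA_DIRECT_REMAP */
    bool can_block;               /* gfpflags_allow_blocking(gfp) */
    bool force_unencrypted;       /* force_dma_unencrypted(dev) */
    bool need_uncached;           /* dma_alloc_need_uncached(dev, attrs) */
};

static bool should_alloc_from_pool(const struct model_ctx *c)
{
    if (!c->coherent_pool_enabled)
        return false;
    if (c->can_block)
        return false;        /* blocking path can decrypt/remap inline */
    if (c->force_unencrypted)
        return true;         /* set_memory_decrypted() may block */
    if (!c->direct_remap_enabled)
        return false;
    return c->need_uncached; /* remapping may block, too */
}

int main(void)
{
    struct model_ctx sev_dev = {
        .coherent_pool_enabled = true,
        .direct_remap_enabled = false,
        .can_block = false,
        .force_unencrypted = true,   /* e.g. a device in an SEV guest */
        .need_uncached = false,
    };

    printf("SEV-style atomic alloc from pool: %s\n",
           should_alloc_from_pool(&sev_dev) ? "yes" : "no");
    return 0;
}

Note how the ordering matches the kernel hunk: the unencrypted-memory case is checked before the remap case, which is what lets CONFIG_DMA_COHERENT_POOL stand alone without CONFIG_DMA_DIRECT_REMAP.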
From patchwork Fri Sep 25 16:18:56 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263442
Date: Fri, 25 Sep 2020 09:18:56 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-11-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 10/30 for 5.4] dma-pool: add pool sizes to debugfs
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, David Rientjes

From: David Rientjes

upstream 2edc5bb3c5cc42131438460a50b7b16905c81c2a commit.

The atomic DMA pools can dynamically expand based on non-blocking allocations that need to use them. Export the sizes of each of these pools, in bytes, through debugfs for measurement.

Suggested-by: Christoph Hellwig
Signed-off-by: David Rientjes
[hch: remove the !CONFIG_DEBUG_FS stubs]
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index c8d61b3a7bd6..dde6de7f8e83 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -3,6 +3,7 @@ * Copyright (C) 2012 ARM Ltd.
* Copyright (C) 2020 Google LLC */ +#include #include #include #include @@ -13,8 +14,11 @@ #include static struct gen_pool *atomic_pool_dma __ro_after_init; +static unsigned long pool_size_dma; static struct gen_pool *atomic_pool_dma32 __ro_after_init; +static unsigned long pool_size_dma32; static struct gen_pool *atomic_pool_kernel __ro_after_init; +static unsigned long pool_size_kernel; #define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE; @@ -29,6 +33,29 @@ static int __init early_coherent_pool(char *p) } early_param("coherent_pool", early_coherent_pool); +static void __init dma_atomic_pool_debugfs_init(void) +{ + struct dentry *root; + + root = debugfs_create_dir("dma_pools", NULL); + if (IS_ERR_OR_NULL(root)) + return; + + debugfs_create_ulong("pool_size_dma", 0400, root, &pool_size_dma); + debugfs_create_ulong("pool_size_dma32", 0400, root, &pool_size_dma32); + debugfs_create_ulong("pool_size_kernel", 0400, root, &pool_size_kernel); +} + +static void dma_atomic_pool_size_add(gfp_t gfp, size_t size) +{ + if (gfp & __GFP_DMA) + pool_size_dma += size; + else if (gfp & __GFP_DMA32) + pool_size_dma32 += size; + else + pool_size_kernel += size; +} + static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, gfp_t gfp) { @@ -76,6 +103,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, if (ret) goto encrypt_mapping; + dma_atomic_pool_size_add(gfp, pool_size); return 0; encrypt_mapping: @@ -160,6 +188,8 @@ static int __init dma_atomic_pool_init(void) if (!atomic_pool_dma32) ret = -ENOMEM; } + + dma_atomic_pool_debugfs_init(); return ret; } postcore_initcall(dma_atomic_pool_init);
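The accounting here is deliberately simple: every successful expansion attributes its bytes to exactly one counter, chosen by the gfp flags that sized the request, and debugfs only reads the counters. A minimal userspace C model of that bookkeeping follows, assuming fake flag bits in place of __GFP_DMA/__GFP_DMA32 and a printout in place of the debugfs files; the names are hypothetical.

#include <stdio.h>

#define MODEL_GFP_DMA   (1u << 0)
#define MODEL_GFP_DMA32 (1u << 1)

static unsigned long pool_size_dma, pool_size_dma32, pool_size_kernel;

/* Mirror of dma_atomic_pool_size_add(): attribute growth to one pool. */
static void pool_size_add(unsigned int gfp, unsigned long size)
{
    if (gfp & MODEL_GFP_DMA)
        pool_size_dma += size;
    else if (gfp & MODEL_GFP_DMA32)
        pool_size_dma32 += size;
    else
        pool_size_kernel += size;
}

int main(void)
{
    pool_size_add(MODEL_GFP_DMA, 128 * 1024);    /* initial DMA pool */
    pool_size_add(MODEL_GFP_DMA32, 128 * 1024);  /* initial DMA32 pool */
    pool_size_add(0, 256 * 1024);                /* kernel pool expanded */

    /* Roughly what dma_pools/pool_size_* would report, in bytes. */
    printf("pool_size_dma:    %lu\n", pool_size_dma);
    printf("pool_size_dma32:  %lu\n", pool_size_dma32);
    printf("pool_size_kernel: %lu\n", pool_size_kernel);
    return 0;
}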
From patchwork Fri Sep 25 16:18:57 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263445
Date: Fri, 25 Sep 2020 09:18:57 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-12-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 11/30 for 5.4] x86/mm: unencrypted non-blocking DMA allocations use coherent pools
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 82fef0ad811fb5976cf36ccc3d2c3bc0195dfb72 commit.

When CONFIG_AMD_MEM_ENCRYPT is enabled and a device requires unencrypted DMA, all non-blocking allocations must originate from the atomic DMA coherent pools. Select CONFIG_DMA_COHERENT_POOL for CONFIG_AMD_MEM_ENCRYPT.
Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 8ef85139553f..be8746e9d864 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1530,6 +1530,7 @@ config X86_CPA_STATISTICS config AMD_MEM_ENCRYPT bool "AMD Secure Memory Encryption (SME) support" depends on X86_64 && CPU_SUP_AMD + select DMA_COHERENT_POOL select DYNAMIC_PHYSICAL_MASK select ARCH_USE_MEMREMAP_PROT select ARCH_HAS_FORCE_DMA_UNENCRYPTED
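The one-line select above works because Kconfig select edges are transitive: enabling AMD_MEM_ENCRYPT forces DMA_COHERENT_POOL, which (per the Kconfig hunk in patch 06 of this series) forces DMA_REMAP, so the pool code is built even when DMA_DIRECT_REMAP stays off. The userspace C sketch below just models that fixed-point resolution for these three symbols; the tiny table and names are hypothetical, not Kconfig internals.

#include <stdbool.h>
#include <stdio.h>

enum opt { AMD_MEM_ENCRYPT, DMA_COHERENT_POOL, DMA_REMAP, NOPT };

static const char *name[NOPT] = {
    "AMD_MEM_ENCRYPT", "DMA_COHERENT_POOL", "DMA_REMAP",
};

/* selects[a] == b means "config a selects b" (-1: selects nothing). */
static const int selects[NOPT] = {
    [AMD_MEM_ENCRYPT]   = DMA_COHERENT_POOL,
    [DMA_COHERENT_POOL] = DMA_REMAP,
    [DMA_REMAP]         = -1,
};

int main(void)
{
    bool enabled[NOPT] = { [AMD_MEM_ENCRYPT] = true };

    /* Follow each select chain until it runs out. */
    for (int o = 0; o < NOPT; o++)
        if (enabled[o])
            for (int s = selects[o]; s != -1; s = selects[s])
                enabled[s] = true;

    for (int o = 0; o < NOPT; o++)
        printf("CONFIG_%s=%c\n", name[o], enabled[o] ? 'y' : 'n');
    return 0;
}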
From patchwork Fri Sep 25 16:18:58 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309185
Date: Fri, 25 Sep 2020 09:18:58 -0700
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Message-Id: <20200925161916.204667-13-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 12/30 for 5.4] dma-pool: scale the default DMA coherent pool size with memory capacity
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 1d659236fb43c4d2b37af7a4309681e834e9ec9a commit.

When AMD memory encryption is enabled, some devices may use more than 256KB/sec from the atomic pools. It would be more appropriate to scale the default size based on memory capacity unless the coherent_pool option is used on the kernel command line. This provides a slight optimization on initial expansion and is deemed appropriate due to the increased reliance on the atomic pools. Note that the default size of 128KB per pool will normally be larger than the single coherent pool implementation since there are now up to three coherent pools (DMA, DMA32, and kernel). Note that even prior to this patch, coherent_pool= for sizes larger than 1 << (PAGE_SHIFT + MAX_ORDER-1) can fail. With new dynamic expansion support, this would be trivially extensible to allow even larger initial sizes.

Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index dde6de7f8e83..35bb51c31fff 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -20,8 +20,8 @@ static unsigned long pool_size_dma32; static struct gen_pool *atomic_pool_kernel __ro_after_init; static unsigned long pool_size_kernel; -#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K -static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE; +/* Size can be defined by the coherent_pool command line */ +static size_t atomic_pool_size; /* Dynamic background expansion when the atomic pool is near capacity */ static struct work_struct atomic_pool_work; @@ -170,6 +170,16 @@ static int __init dma_atomic_pool_init(void) { int ret = 0; + /* + * If coherent_pool was not used on the command line, default the pool + * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
+ */ + if (!atomic_pool_size) { + atomic_pool_size = max(totalram_pages() >> PAGE_SHIFT, 1UL) * + SZ_128K; + atomic_pool_size = min_t(size_t, atomic_pool_size, + 1 << (PAGE_SHIFT + MAX_ORDER-1)); + } INIT_WORK(&atomic_pool_work, atomic_pool_work_fn); atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size,
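The policy the comment states is 128 KiB of pool per 1 GiB of RAM, clamped between 128 KiB and one MAX_ORDER-1 block. The userspace C sketch below computes that intended default for a few RAM sizes; note it models the stated intent, not the expression merged here, since (as the next patch in the series shows) the merged arithmetic totalram_pages() >> PAGE_SHIFT miscounts GiBs. Assumptions: 4 KiB pages and a 4 MiB MAX_ORDER-1 block; the names are hypothetical.

#include <stdio.h>

#define KIB(x) ((unsigned long long)(x) << 10)
#define GIB(x) ((unsigned long long)(x) << 30)
#define MAX_BLOCK KIB(4096)       /* 4 MiB: order-10 block of 4 KiB pages */

static unsigned long long default_pool_size(unsigned long long ram_bytes)
{
    unsigned long long size = ram_bytes / GIB(1) * KIB(128);

    if (size < KIB(128))
        size = KIB(128);          /* floor: one minimal pool */
    if (size > MAX_BLOCK)
        size = MAX_BLOCK;         /* ceiling: largest single allocation */
    return size;
}

int main(void)
{
    unsigned long long ram[] = { GIB(1) / 4, GIB(4), GIB(64) };

    for (int i = 0; i < 3; i++)
        printf("%llu MiB RAM -> %llu KiB per pool\n",
               ram[i] >> 20, default_pool_size(ram[i]) >> 10);
    return 0;
}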
From patchwork Fri Sep 25 16:18:59 2020
Date: Fri, 25 Sep 2020 09:18:59 -0700
Message-Id: <20200925161916.204667-14-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 13/30 for 5.4] dma-pool: fix too large DMA pools on medium memory size systems
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Geert Uytterhoeven, David Rientjes, Christoph Hellwig

From: Geert Uytterhoeven

upstream 3ee06a6d532f75f20528ff4d2c473cda36c484fe commit.

On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA memory pools are much larger than intended (e.g. 2 MiB instead of 128 KiB on a 256 MiB system).

Fix this by correcting the calculation of the number of GiBs of RAM in the system. Invert the order of the min/max operations, to keep on calculating in pages until the last step, which aids readability.

Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
Signed-off-by: Geert Uytterhoeven
Acked-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 35bb51c31fff..8cfa01243ed2 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -175,10 +175,9 @@ static int __init dma_atomic_pool_init(void)
 	 * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
 	 */
 	if (!atomic_pool_size) {
-		atomic_pool_size = max(totalram_pages() >> PAGE_SHIFT, 1UL) *
-					SZ_128K;
-		atomic_pool_size = min_t(size_t, atomic_pool_size,
-					 1 << (PAGE_SHIFT + MAX_ORDER-1));
+		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
+		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
+		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
 	}
 	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
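To make the corrected sizing concrete, here is a minimal userspace C sketch (not kernel code) of the old and new default-size formulas, assuming 4 KiB pages and MAX_ORDER = 11; the macros are simplified stand-ins for the kernel's:

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12                               /* assumed 4 KiB pages */
#define MAX_ORDER 11                                /* assumed x86-like value */
#define MAX_ORDER_NR_PAGES (1UL << (MAX_ORDER - 1))
#define SZ_128K (128UL * 1024)
#define SZ_1G (1UL << 30)

/* Formula from the previous patch: scales per page, not per GiB. */
static size_t old_default(unsigned long totalram_pages)
{
	unsigned long n = totalram_pages >> PAGE_SHIFT;
	size_t size = (n ? n : 1) * SZ_128K;
	size_t cap = 1UL << (PAGE_SHIFT + MAX_ORDER - 1);
	return size < cap ? size : cap;
}

/* Corrected formula: 128 KiB per 1 GiB, min 128 KiB, max MAX_ORDER-1. */
static size_t new_default(unsigned long totalram_pages)
{
	unsigned long pages = totalram_pages / (SZ_1G / SZ_128K);
	size_t size;

	if (pages > MAX_ORDER_NR_PAGES)
		pages = MAX_ORDER_NR_PAGES;
	size = (size_t)pages << PAGE_SHIFT;
	return size > SZ_128K ? size : SZ_128K;
}

int main(void)
{
	unsigned long pages_256m = (256UL << 20) >> PAGE_SHIFT; /* 65536 pages */

	printf("256 MiB system: old=%zu KiB, new=%zu KiB\n",
	       old_default(pages_256m) >> 10, new_default(pages_256m) >> 10);
	/* Prints "old=2048 KiB, new=128 KiB", matching the commit message. */
	return 0;
}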
From patchwork Fri Sep 25 16:19:00 2020
Date: Fri, 25 Sep 2020 09:19:00 -0700
Message-Id: <20200925161916.204667-15-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 14/30 for 5.4] dma-pool: decouple DMA_REMAP from DMA_COHERENT_POOL
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Alex Xu, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream dbed452a078d56bc7f1abecc3edd6a75e8e4484e commit.

DMA_REMAP is an unnecessary requirement for AMD SEV, which requires DMA_COHERENT_POOL, so avoid selecting it when it is otherwise unnecessary.

The only other requirement for DMA coherent pools is DMA_DIRECT_REMAP, so ensure that properly selects the config option when needed.

Fixes: 82fef0ad811f ("x86/mm: unencrypted non-blocking DMA allocations use coherent pools")
Reported-by: Alex Xu (Hello71)
Signed-off-by: David Rientjes
Tested-by: Alex Xu (Hello71)
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/Kconfig | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index d006668c0027..a0ce3c1494fd 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -73,18 +73,18 @@ config SWIOTLB
 config DMA_NONCOHERENT_MMAP
 	bool
 
+config DMA_COHERENT_POOL
+	bool
+
 config DMA_REMAP
+	bool
 	depends on MMU
 	select GENERIC_ALLOCATOR
 	select DMA_NONCOHERENT_MMAP
-	bool
-
-config DMA_COHERENT_POOL
-	bool
-	select DMA_REMAP
 
 config DMA_DIRECT_REMAP
 	bool
+	select DMA_REMAP
 	select DMA_COHERENT_POOL
 
 config DMA_CMA
From patchwork Fri Sep 25 16:19:01 2020
Date: Fri, 25 Sep 2020 09:19:01 -0700
Message-Id: <20200925161916.204667-16-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 15/30 for 5.4] dma-direct: consolidate the error handling in dma_direct_alloc_pages
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Robin Murphy

From: Christoph Hellwig

upstream 3d0fc341c4bb66b2c41c0d1ec954a6d300e100b7 commit.

Use a goto label to merge two error return cases.

Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 210ea469028c..5343afbb8af3 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -203,11 +203,8 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
 				dma_pgprot(dev, PAGE_KERNEL, attrs),
 				__builtin_return_address(0));
-		if (!ret) {
-			dma_free_contiguous(dev, page, size);
-			return ret;
-		}
-
+		if (!ret)
+			goto out_free_pages;
 		memset(ret, 0, size);
 		goto done;
 	}
@@ -220,8 +217,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		 * so log an error and fail.
 		 */
 		dev_info(dev, "Rejecting highmem page from CMA.\n");
-		dma_free_contiguous(dev, page, size);
-		return NULL;
+		goto out_free_pages;
 	}
 
 	ret = page_address(page);
@@ -241,6 +237,9 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	else
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
+out_free_pages:
+	dma_free_contiguous(dev, page, size);
+	return NULL;
 }
 
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
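The pattern being adopted is the standard kernel single-exit idiom: every failure path jumps to one label so the cleanup call exists exactly once. A minimal standalone C sketch of the idiom, with malloc/free standing in for the page allocation and its cleanup (illustration only, not the kernel function):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *alloc_and_map(size_t size, int fail_remap)
{
	void *page = malloc(size);      /* stand-in for dma_alloc_contiguous() */
	void *mapping;

	if (!page)
		return NULL;

	mapping = fail_remap ? NULL : malloc(size); /* stand-in for the remap */
	if (!mapping)
		goto out_free_pages;    /* failure paths funnel here */

	memset(mapping, 0, size);
	free(page);                     /* demo only; keep just the mapping */
	return mapping;

out_free_pages:
	free(page);                     /* the one and only cleanup site */
	return NULL;
}

int main(void)
{
	void *p = alloc_and_map(4096, 0);

	printf("first try: %s\n", p ? "ok" : "failed");
	free(p);
	p = alloc_and_map(4096, 1);
	printf("forced remap failure: %s\n", p ? "ok" : "failed");
	return 0;
}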
From patchwork Fri Sep 25 16:19:02 2020
Date: Fri, 25 Sep 2020 09:19:02 -0700
Message-Id: <20200925161916.204667-17-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 16/30 for 5.4] xtensa: use the generic uncached segment support
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Max Filippov

From: Christoph Hellwig

upstream 0f665b9e2a6d4cc963e6cd349d40320ed5281f95 commit.

Switch xtensa over to use the generic uncached support, and thus the generic implementations of dma_alloc_* and dma_free_*, which also gain support for mmaping DMA memory. The non-working nommu DMA support has been disabled, but could be re-enabled easily if platforms that actually have an uncached segment show up.

Signed-off-by: Christoph Hellwig
Reviewed-by: Max Filippov
Tested-by: Max Filippov
Signed-off-by: Peter Gonda
---
 arch/xtensa/Kconfig                |   6 +-
 arch/xtensa/include/asm/platform.h |  27 -------
 arch/xtensa/kernel/Makefile        |   3 +-
 arch/xtensa/kernel/pci-dma.c       | 121 +++--------------------------
 4 files changed, 18 insertions(+), 139 deletions(-)

diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 8352037322df..d3a5891eff2e 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -3,8 +3,10 @@ config XTENSA
 	def_bool y
 	select ARCH_32BIT_OFF_T
 	select ARCH_HAS_BINFMT_FLAT if !MMU
-	select ARCH_HAS_SYNC_DMA_FOR_CPU
-	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_HAS_DMA_PREP_COHERENT if MMU
+	select ARCH_HAS_SYNC_DMA_FOR_CPU if MMU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if MMU
+	select ARCH_HAS_UNCACHED_SEGMENT if MMU
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/xtensa/include/asm/platform.h b/arch/xtensa/include/asm/platform.h
index 913826dfa838..f2c48522c5a1 100644
--- a/arch/xtensa/include/asm/platform.h
+++ b/arch/xtensa/include/asm/platform.h
@@ -65,31 +65,4 @@ extern void platform_calibrate_ccount (void);
  */
 void cpu_reset(void) __attribute__((noreturn));
 
-/*
- * Memory caching is platform-dependent in noMMU xtensa configurations.
- * The following set of functions should be implemented in platform code
- * in order to enable coherent DMA memory operations when CONFIG_MMU is not
- * enabled. Default implementations do nothing and issue a warning.
- */
-
-/*
- * Check whether p points to a cached memory.
- */
-bool platform_vaddr_cached(const void *p);
-
-/*
- * Check whether p points to an uncached memory.
- */
-bool platform_vaddr_uncached(const void *p);
-
-/*
- * Return pointer to an uncached view of the cached sddress p.
- */
-void *platform_vaddr_to_uncached(void *p);
-
-/*
- * Return pointer to a cached view of the uncached sddress p.
- */
-void *platform_vaddr_to_cached(void *p);
-
 #endif /* _XTENSA_PLATFORM_H */
diff --git a/arch/xtensa/kernel/Makefile b/arch/xtensa/kernel/Makefile
index 6f629027ac7d..d4082c6a121b 100644
--- a/arch/xtensa/kernel/Makefile
+++ b/arch/xtensa/kernel/Makefile
@@ -5,10 +5,11 @@
 
 extra-y := head.o vmlinux.lds
 
-obj-y := align.o coprocessor.o entry.o irq.o pci-dma.o platform.o process.o \
+obj-y := align.o coprocessor.o entry.o irq.o platform.o process.o \
 	 ptrace.o setup.o signal.o stacktrace.o syscall.o time.o traps.o \
 	 vectors.o
 
+obj-$(CONFIG_MMU) += pci-dma.o
 obj-$(CONFIG_PCI) += pci.o
 obj-$(CONFIG_MODULES) += xtensa_ksyms.o module.o
 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 154979d62b73..1c82e21de4f6 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -81,122 +81,25 @@ void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
 	}
 }
 
-#ifdef CONFIG_MMU
-bool platform_vaddr_cached(const void *p)
-{
-	unsigned long addr = (unsigned long)p;
-
-	return addr >= XCHAL_KSEG_CACHED_VADDR &&
-	       addr - XCHAL_KSEG_CACHED_VADDR < XCHAL_KSEG_SIZE;
-}
-
-bool platform_vaddr_uncached(const void *p)
-{
-	unsigned long addr = (unsigned long)p;
-
-	return addr >= XCHAL_KSEG_BYPASS_VADDR &&
-	       addr - XCHAL_KSEG_BYPASS_VADDR < XCHAL_KSEG_SIZE;
-}
-
-void *platform_vaddr_to_uncached(void *p)
-{
-	return p + XCHAL_KSEG_BYPASS_VADDR - XCHAL_KSEG_CACHED_VADDR;
-}
-
-void *platform_vaddr_to_cached(void *p)
-{
-	return p + XCHAL_KSEG_CACHED_VADDR - XCHAL_KSEG_BYPASS_VADDR;
-}
-#else
-bool __attribute__((weak)) platform_vaddr_cached(const void *p)
-{
-	WARN_ONCE(1, "Default %s implementation is used\n", __func__);
-	return true;
-}
-
-bool __attribute__((weak)) platform_vaddr_uncached(const void *p)
-{
-	WARN_ONCE(1, "Default %s implementation is used\n", __func__);
-	return false;
-}
-
-void __attribute__((weak)) *platform_vaddr_to_uncached(void *p)
+void arch_dma_prep_coherent(struct page *page, size_t size)
 {
-	WARN_ONCE(1, "Default %s implementation is used\n", __func__);
-	return p;
-}
-
-void __attribute__((weak)) *platform_vaddr_to_cached(void *p)
-{
-	WARN_ONCE(1, "Default %s implementation is used\n", __func__);
-	return p;
+	__invalidate_dcache_range((unsigned long)page_address(page), size);
 }
-#endif
 
 /*
- * Note: We assume that the full memory space is always mapped to 'kseg'
- * Otherwise we have to use page attributes (not implemented).
+ * Memory caching is platform-dependent in noMMU xtensa configurations.
+ * The following two functions should be implemented in platform code
+ * in order to enable coherent DMA memory operations when CONFIG_MMU is not
+ * enabled.
  */
-
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
-		gfp_t flag, unsigned long attrs)
-{
-	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	struct page *page = NULL;
-
-	/* ignore region speicifiers */
-
-	flag &= ~(__GFP_DMA | __GFP_HIGHMEM);
-
-	if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
-		flag |= GFP_DMA;
-
-	if (gfpflags_allow_blocking(flag))
-		page = dma_alloc_from_contiguous(dev, count, get_order(size),
-						 flag & __GFP_NOWARN);
-
-	if (!page)
-		page = alloc_pages(flag | __GFP_ZERO, get_order(size));
-
-	if (!page)
-		return NULL;
-
-	*handle = phys_to_dma(dev, page_to_phys(page));
-
 #ifdef CONFIG_MMU
-	if (PageHighMem(page)) {
-		void *p;
-
-		p = dma_common_contiguous_remap(page, size,
-						pgprot_noncached(PAGE_KERNEL),
-						__builtin_return_address(0));
-		if (!p) {
-			if (!dma_release_from_contiguous(dev, page, count))
-				__free_pages(page, get_order(size));
-		}
-		return p;
-	}
-#endif
-	BUG_ON(!platform_vaddr_cached(page_address(page)));
-	__invalidate_dcache_range((unsigned long)page_address(page), size);
-	return platform_vaddr_to_uncached(page_address(page));
+void *uncached_kernel_address(void *p)
+{
+	return p + XCHAL_KSEG_BYPASS_VADDR - XCHAL_KSEG_CACHED_VADDR;
 }
 
-void arch_dma_free(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle, unsigned long attrs)
+void *cached_kernel_address(void *p)
 {
-	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	struct page *page;
-
-	if (platform_vaddr_uncached(vaddr)) {
-		page = virt_to_page(platform_vaddr_to_cached(vaddr));
-	} else {
-#ifdef CONFIG_MMU
-		dma_common_free_remap(vaddr, size);
-#endif
-		page = pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_handle)));
-	}
-
-	if (!dma_release_from_contiguous(dev, page, count))
-		__free_pages(page, get_order(size));
+	return p + XCHAL_KSEG_CACHED_VADDR - XCHAL_KSEG_BYPASS_VADDR;
 }
+#endif /* CONFIG_MMU */
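The two surviving helpers are pure constant-offset translations between the cached and bypass (uncached) views that the xtensa KSEG layout provides for the same physical memory. A standalone sketch of that arithmetic; the base addresses below are illustrative examples, not taken from any particular xtensa configuration:

#include <stdio.h>
#include <stdint.h>

#define KSEG_CACHED_VADDR 0xd0000000u   /* assumed example base */
#define KSEG_BYPASS_VADDR 0xd8000000u   /* assumed example base */

/* Same physical memory, different attributes: translating between the
 * two views is just adding or subtracting the distance between bases. */
static uintptr_t to_uncached(uintptr_t p)
{
	return p + KSEG_BYPASS_VADDR - KSEG_CACHED_VADDR;
}

static uintptr_t to_cached(uintptr_t p)
{
	return p + KSEG_CACHED_VADDR - KSEG_BYPASS_VADDR;
}

int main(void)
{
	uintptr_t cached = 0xd0123000u;
	uintptr_t uncached = to_uncached(cached);

	printf("cached   %#lx\n", (unsigned long)cached);
	printf("uncached %#lx\n", (unsigned long)uncached);
	printf("round trip ok: %d\n", to_cached(uncached) == cached);
	return 0;
}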
From patchwork Fri Sep 25 16:19:03 2020
Date: Fri, 25 Sep 2020 09:19:03 -0700
Message-Id: <20200925161916.204667-18-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 17/30 for 5.4] dma-direct: make uncached_kernel_address more general
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Robin Murphy

From: Christoph Hellwig

upstream fa7e2247c5729f990c7456fe09f3af99c8f2571b commit.

Rename the symbol to arch_dma_set_uncached, and pass a size to it as well as allow an error return. That will allow reusing this hook for in-place pagetable remapping.

As the in-place remap doesn't always require an explicit cache flush, also detangle ARCH_HAS_DMA_PREP_COHERENT from ARCH_HAS_DMA_SET_UNCACHED.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
Change-Id: I69aaa5ee5f73547aaf4de8fb0a57494709fa5eb5
Signed-off-by: Peter Gonda
---
 arch/Kconfig                    |  8 ++++----
 arch/microblaze/Kconfig         |  2 +-
 arch/microblaze/mm/consistent.c |  2 +-
 arch/mips/Kconfig               |  3 ++-
 arch/mips/mm/dma-noncoherent.c  |  2 +-
 arch/nios2/Kconfig              |  3 ++-
 arch/nios2/mm/dma-mapping.c     |  2 +-
 arch/xtensa/Kconfig             |  2 +-
 arch/xtensa/kernel/pci-dma.c    |  2 +-
 include/linux/dma-noncoherent.h |  2 +-
 kernel/dma/direct.c             | 10 ++++++----
 11 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 238dccfa7691..38b6e74750fc 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -248,11 +248,11 @@ config ARCH_HAS_SET_DIRECT_MAP
 	bool
 
 #
-# Select if arch has an uncached kernel segment and provides the
-# uncached_kernel_address / cached_kernel_address symbols to use it
+# Select if the architecture provides the arch_dma_set_uncached symbol to
+# either provide an uncached segement alias for a DMA allocation, or
+# to remap the page tables in place.
 #
-config ARCH_HAS_UNCACHED_SEGMENT
-	select ARCH_HAS_DMA_PREP_COHERENT
+config ARCH_HAS_DMA_SET_UNCACHED
 	bool
 
 # Select if arch init_task must go in the __init_task_data section
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 261c26df1c9f..2bdb3ceb525d 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -8,7 +8,7 @@ config MICROBLAZE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-	select ARCH_HAS_UNCACHED_SEGMENT if !MMU
+	select ARCH_HAS_DMA_SET_UNCACHED if !MMU
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select BUILDTIME_EXTABLE_SORT
diff --git a/arch/microblaze/mm/consistent.c b/arch/microblaze/mm/consistent.c
index 8c5f0c332d8b..457581fb74cc 100644
--- a/arch/microblaze/mm/consistent.c
+++ b/arch/microblaze/mm/consistent.c
@@ -40,7 +40,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 #define UNCACHED_SHADOW_MASK 0
 #endif /* CONFIG_XILINX_UNCACHED_SHADOW */
 
-void *uncached_kernel_address(void *ptr)
+void *arch_dma_set_uncached(void *ptr, size_t size)
 {
 	unsigned long addr = (unsigned long)ptr;
 
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index c1c3da4fc667..ab98d8bad08e 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1132,8 +1132,9 @@ config DMA_NONCOHERENT
 	# significant advantages.
 	#
 	select ARCH_HAS_DMA_WRITE_COMBINE
+	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-	select ARCH_HAS_UNCACHED_SEGMENT
+	select ARCH_HAS_DMA_SET_UNCACHED
 	select DMA_NONCOHERENT_MMAP
 	select DMA_NONCOHERENT_CACHE_SYNC
 	select NEED_DMA_MAP_STATE
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index fcf6d3eaac66..d71b947a2121 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -49,7 +49,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 	dma_cache_wback_inv((unsigned long)page_address(page), size);
 }
 
-void *uncached_kernel_address(void *addr)
+void *arch_dma_set_uncached(void *addr, size_t size)
 {
 	return (void *)(__pa(addr) + UNCAC_BASE);
 }
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 44b5da37e8bd..2fc4ed210b5f 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -2,9 +2,10 @@
 config NIOS2
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-	select ARCH_HAS_UNCACHED_SEGMENT
+	select ARCH_HAS_DMA_SET_UNCACHED
 	select ARCH_NO_SWAP
 	select TIMER_OF
 	select GENERIC_ATOMIC64
diff --git a/arch/nios2/mm/dma-mapping.c b/arch/nios2/mm/dma-mapping.c
index 9cb238664584..19f6d6b394e6 100644
--- a/arch/nios2/mm/dma-mapping.c
+++ b/arch/nios2/mm/dma-mapping.c
@@ -67,7 +67,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 	flush_dcache_range(start, start + size);
 }
 
-void *uncached_kernel_address(void *ptr)
+void *arch_dma_set_uncached(void *ptr, size_t size)
 {
 	unsigned long addr = (unsigned long)ptr;
 
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index d3a5891eff2e..75bc567c5c10 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -6,7 +6,7 @@ config XTENSA
 	select ARCH_HAS_DMA_PREP_COHERENT if MMU
 	select ARCH_HAS_SYNC_DMA_FOR_CPU if MMU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if MMU
-	select ARCH_HAS_UNCACHED_SEGMENT if MMU
+	select ARCH_HAS_DMA_SET_UNCACHED if MMU
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 1c82e21de4f6..d704eb67867c 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -93,7 +93,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 * enabled.
  */
 #ifdef CONFIG_MMU
-void *uncached_kernel_address(void *p)
+void *arch_dma_set_uncached(void *p, size_t size)
 {
 	return p + XCHAL_KSEG_BYPASS_VADDR - XCHAL_KSEG_CACHED_VADDR;
 }
diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index e30fca1f1b12..dc6ddbb26846 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -108,7 +108,7 @@ static inline void arch_dma_prep_coherent(struct page *page, size_t size)
 }
 #endif /* CONFIG_ARCH_HAS_DMA_PREP_COHERENT */
 
-void *uncached_kernel_address(void *addr);
+void *arch_dma_set_uncached(void *addr, size_t size);
 void *cached_kernel_address(void *addr);
 
 #endif /* _LINUX_DMA_NONCOHERENT_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 5343afbb8af3..bb5cb5af9f7d 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -226,10 +226,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	memset(ret, 0, size);
 
-	if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+	if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    dma_alloc_need_uncached(dev, attrs)) {
 		arch_dma_prep_coherent(page, size);
-		ret = uncached_kernel_address(ret);
+		ret = arch_dma_set_uncached(ret, size);
+		if (IS_ERR(ret))
+			goto out_free_pages;
 	}
 done:
 	if (force_dma_unencrypted(dev))
@@ -271,7 +273,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
 	    dma_alloc_need_uncached(dev, attrs))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
@@ -281,7 +283,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
 	    dma_alloc_need_uncached(dev, attrs))
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
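The new calling convention means callers must treat the returned pointer as potentially an encoded errno, because an in-place page-table remap (unlike a fixed segment alias) can fail. A self-contained userspace sketch; ERR_PTR/IS_ERR are re-implemented locally and the arch hook below is hypothetical, with an invented failure condition:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <errno.h>

/* Local stand-ins for the kernel's ERR_PTR/IS_ERR helpers. */
#define MAX_ERRNO 4095
static void *ERR_PTR(long error) { return (void *)error; }
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Hypothetical arch hook: pretend an in-place remap can only cover
 * up to 1 MiB before it runs out of resources. */
static void *arch_dma_set_uncached(void *addr, size_t size)
{
	if (size > (1 << 20))
		return ERR_PTR(-ENOMEM);
	return addr;            /* segment-alias style: same pointer back */
}

int main(void)
{
	char buf[64];
	void *ret;

	ret = arch_dma_set_uncached(buf, sizeof(buf));
	printf("small remap: %s\n", IS_ERR(ret) ? "error" : "ok");

	ret = arch_dma_set_uncached(buf, 2 << 20);
	printf("large remap: %s\n", IS_ERR(ret) ? "error" : "ok");
	return 0;
}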
From patchwork Fri Sep 25 16:19:04 2020
Date: Fri, 25 Sep 2020 09:19:04 -0700
Message-Id: <20200925161916.204667-19-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 18/30 for 5.4] dma-direct: always align allocation size in dma_direct_alloc_pages()
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 633d5fce78a61e8727674467944939f55b0bcfab commit.

dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted() works at page granularity. It's necessary to page align the allocation size in dma_direct_alloc_pages() for consistent behavior.

This also fixes an issue when arch_dma_prep_coherent() is called on an unaligned allocation size for dma_alloc_need_uncached() when CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED is enabled.
Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Change-Id: I6ede6ca2864a9fb3ace42df7a0da6725ae453f1c
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index bb5cb5af9f7d..e72bb0dc8150 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -126,11 +126,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp, unsigned long attrs)
 {
-	size_t alloc_size = PAGE_ALIGN(size);
 	int node = dev_to_node(dev);
 	struct page *page = NULL;
 	u64 phys_mask;
 
+	WARN_ON_ONCE(!PAGE_ALIGNED(size));
+
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
@@ -138,14 +139,14 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	gfp &= ~__GFP_ZERO;
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_mask);
-	page = dma_alloc_contiguous(dev, alloc_size, gfp);
+	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
-		dma_free_contiguous(dev, page, alloc_size);
+		dma_free_contiguous(dev, page, size);
 		page = NULL;
 	}
 again:
 	if (!page)
-		page = alloc_pages_node(node, gfp, get_order(alloc_size));
+		page = alloc_pages_node(node, gfp, get_order(size));
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -172,8 +173,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 
+	size = PAGE_ALIGN(size);
+
 	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
-		ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
+		ret = dma_alloc_from_pool(dev, size, &page, gfp);
 		if (!ret)
 			return NULL;
 		goto done;
@@ -197,10 +200,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	    dma_alloc_need_uncached(dev, attrs)) ||
 	    (IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
 		/* remove any dirty cache lines on the kernel alias */
-		arch_dma_prep_coherent(page, PAGE_ALIGN(size));
+		arch_dma_prep_coherent(page, size);
 
 		/* create a coherent mapping */
-		ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
+		ret = dma_common_contiguous_remap(page, size,
 				dma_pgprot(dev, PAGE_KERNEL, attrs),
 				__builtin_return_address(0));
 		if (!ret)
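As a worked example of the granularity mismatch: a 1 KiB request covers one full page once aligned, and that page count is what set_memory_decrypted() operates on. A standalone sketch modeling PAGE_ALIGN and a kernel-style get_order(), assuming 4 KiB pages:

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12                   /* assumed 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Kernel-style get_order(): smallest order whose block covers size. */
static int get_order(size_t size)
{
	int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	size_t requests[] = { 1024, 4096, 5000, 16384 };
	int i;

	for (i = 0; i < 4; i++) {
		size_t size = PAGE_ALIGN(requests[i]);
		printf("request %5zu -> aligned %5zu, order %d, %d page(s)\n",
		       requests[i], size, get_order(size),
		       1 << get_order(size));
	}
	/* 1024 -> 4096, order 0: one whole page is decrypted. */
	return 0;
}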
From patchwork Fri Sep 25 16:19:05 2020
Date: Fri, 25 Sep 2020 09:19:05 -0700
Message-Id: <20200925161916.204667-20-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 19/30 for 5.4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 96a539fa3bb71f443ae08e57b9f63d6e5bb2207c commit.

If arch_dma_set_uncached() fails after memory has been decrypted, it needs to be re-encrypted before freeing.
Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index e72bb0dc8150..b4a5b7076399 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -234,7 +234,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		arch_dma_prep_coherent(page, size);
 		ret = arch_dma_set_uncached(ret, size);
 		if (IS_ERR(ret))
-			goto out_free_pages;
+			goto out_encrypt_pages;
 	}
 done:
 	if (force_dma_unencrypted(dev))
@@ -242,6 +242,11 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	else
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
+
+out_encrypt_pages:
+	if (force_dma_unencrypted(dev))
+		set_memory_encrypted((unsigned long)page_address(page),
+				     1 << get_order(size));
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;
From patchwork Fri Sep 25 16:19:06 2020
Date: Fri, 25 Sep 2020 09:19:06 -0700
Message-Id: <20200925161916.204667-21-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 20/30 for 5.4] dma-direct: check return value when encrypting or decrypting memory
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 56fccf21d1961a06e2a0c96ce446ebf036651062 commit.

__change_page_attr() can fail which will cause set_memory_encrypted() and set_memory_decrypted() to return non-zero.

If the device requires unencrypted DMA memory and decryption fails, simply free the memory and fail.

If attempting to re-encrypt in the failure path and that encryption fails, there is no alternative other than to leak the memory.
Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b4a5b7076399..ac611b4f65b9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -172,6 +172,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 	void *ret;
+	int err;
 
 	size = PAGE_ALIGN(size);
 
@@ -224,8 +225,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	}
 
 	ret = page_address(page);
-	if (force_dma_unencrypted(dev))
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_decrypted((unsigned long)ret,
+					   1 << get_order(size));
+		if (err)
+			goto out_free_pages;
+	}
 
 	memset(ret, 0, size);
 
@@ -244,9 +249,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	return ret;
 
 out_encrypt_pages:
-	if (force_dma_unencrypted(dev))
-		set_memory_encrypted((unsigned long)page_address(page),
-				     1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_encrypted((unsigned long)page_address(page),
+					   1 << get_order(size));
+		/* If memory cannot be re-encrypted, it must be leaked */
+		if (err)
+			return NULL;
+	}
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;
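The resulting policy is easiest to see in isolation. A standalone sketch with stub set_memory_* functions (the stub names and their injected failures are illustrative only); note how the re-encrypt step falls through into the free only when re-encryption succeeds:

#include <stdio.h>
#include <stdlib.h>

/* Stubs standing in for the x86 set_memory_*() calls; they return 0 on
 * success and non-zero on failure, like the real functions. */
static int stub_set_memory_decrypted(void *addr, int npages) { return 0; }
static int stub_set_memory_encrypted(void *addr, int npages) { return -1; }

static void *alloc_unencrypted(size_t size, int fail_after_decrypt)
{
	void *page = malloc(size);

	if (!page)
		return NULL;
	if (stub_set_memory_decrypted(page, 1))
		goto out_free_pages;    /* decryption failed: free and fail */
	if (!fail_after_decrypt)
		return page;            /* success path */

	/* A later step failed, so try to restore the encrypted state. */
	if (stub_set_memory_encrypted(page, 1))
		return NULL;            /* cannot re-encrypt: leak on purpose */
	/* fallthrough: re-encryption succeeded, safe to free */
out_free_pages:
	free(page);
	return NULL;
}

int main(void)
{
	void *p = alloc_unencrypted(4096, 0);

	printf("normal allocation: %s\n", p ? "ok" : "failed");
	free(p);
	p = alloc_unencrypted(4096, 1);
	printf("late failure path: %s (memory leaked by design)\n",
	       p ? "ok" : "failed");
	return 0;
}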
From patchwork Fri Sep 25 16:19:07 2020
Date: Fri, 25 Sep 2020 09:19:07 -0700
Message-Id: <20200925161916.204667-22-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
Subject: [PATCH 21/30 for 5.4] dma-direct: add missing set_memory_decrypted() for coherent mapping
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Christoph Hellwig

From: David Rientjes

upstream 1a2b3357e860d890f8045367b179c7e7e802cd71 commit.

When a coherent mapping is created in dma_direct_alloc_pages(), it needs to be decrypted if the device requires unencrypted DMA before returning.
Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers into dma-direct")
Signed-off-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/direct.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ac611b4f65b9..6c677ffdbd53 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -209,6 +209,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 				__builtin_return_address(0));
 		if (!ret)
 			goto out_free_pages;
+		if (force_dma_unencrypted(dev)) {
+			err = set_memory_decrypted((unsigned long)ret,
+						   1 << get_order(size));
+			if (err)
+				goto out_free_pages;
+		}
 		memset(ret, 0, size);
 		goto done;
 	}
From patchwork Fri Sep 25 16:19:08 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309183
Subject: [PATCH 22/30 for 5.4] dma-mapping: DMA_COHERENT_POOL should select GENERIC_ALLOCATOR
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, kernel test robot, Christoph Hellwig, David Rientjes
Date: Fri, 25 Sep 2020 09:19:08 -0700
Message-Id: <20200925161916.204667-23-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Christoph Hellwig

upstream d07ae4c486908615ab336b987c7c367d132fd844 commit.

The dma coherent pool code needs genalloc. Move the select over from
DMA_REMAP, which doesn't actually need it.

Fixes: dbed452a078d ("dma-pool: decouple DMA_REMAP from DMA_COHERENT_POOL")
Reported-by: kernel test robot
Signed-off-by: Christoph Hellwig
Acked-by: David Rientjes
Signed-off-by: Peter Gonda
---
 kernel/dma/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index a0ce3c1494fd..6ad16b7c9652 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -74,12 +74,12 @@ config DMA_NONCOHERENT_MMAP
 	bool
 
 config DMA_COHERENT_POOL
+	select GENERIC_ALLOCATOR
 	bool
 
 config DMA_REMAP
 	bool
 	depends on MMU
-	select GENERIC_ALLOCATOR
 	select DMA_NONCOHERENT_MMAP
 
 config DMA_DIRECT_REMAP
From patchwork Fri Sep 25 16:19:09 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309184
Subject: [PATCH 23/30 for 5.4] dma-mapping: warn when coherent pool is depleted
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, David Rientjes, Guenter Roeck, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:09 -0700
Message-Id: <20200925161916.204667-24-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: David Rientjes

upstream 71cdec4fab76667dabdbb2ca232b039004ebd40f commit.

When a DMA coherent pool is depleted, allocation failures may or may not
get reported in the kernel log depending on the allocator. The admin
does have a workaround, however, by using coherent_pool= on the kernel
command line. Provide some guidance on the failure and a recommended
minimum size for the pools (double the size).
Signed-off-by: David Rientjes
Tested-by: Guenter Roeck
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 8cfa01243ed2..39ca26fa41b5 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -239,12 +239,16 @@ void *dma_alloc_from_pool(struct device *dev, size_t size,
 	}
 
 	val = gen_pool_alloc(pool, size);
-	if (val) {
+	if (likely(val)) {
 		phys_addr_t phys = gen_pool_virt_to_phys(pool, val);
 
 		*ret_page = pfn_to_page(__phys_to_pfn(phys));
 		ptr = (void *)val;
 		memset(ptr, 0, size);
+	} else {
+		WARN_ONCE(1, "DMA coherent pool depleted, increase size "
+			  "(recommended min coherent_pool=%zuK)\n",
+			  gen_pool_size(pool) >> 9);
 	}
 	if (gen_pool_avail(pool) < atomic_pool_size)
 		schedule_work(&atomic_pool_work);
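A detail worth spelling out in the hunk above: gen_pool_size() returns the pool size in bytes, so the >> 9 shift (divide by 512) prints twice the current size expressed in KiB, which is exactly the "double the size" recommendation from the commit message. Acting on the warning then just means booting with a larger pool on the kernel command line, for example (the size here is hypothetical, chosen only for illustration):

    coherent_pool=512K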
From patchwork Fri Sep 25 16:19:10 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263440
Subject: [PATCH 24/30 for 5.4] dma-direct: provide function to check physical memory area validity
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Nicolas Saenz Julienne, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:10 -0700
Message-Id: <20200925161916.204667-25-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream 567f6a6eba0c09e5f502e0290e57651befa8aacb commit.

dma_coherent_ok() checks if a physical memory area fits a device's DMA
constraints.

Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 include/linux/dma-direct.h | 1 +
 kernel/dma/direct.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index fb5ec847ddf3..8ccddee1f78a 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -68,6 +68,7 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 u64 dma_direct_get_required_mask(struct device *dev);
 gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 		u64 *phys_mask);
+bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);
 void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs);
 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 6c677ffdbd53..54c1c3a20c09 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -84,7 +84,7 @@ gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	return 0;
 }
 
-static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
+bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 {
 	return phys_to_dma_direct(dev, phys) + size - 1 <=
 			min_not_zero(dev->coherent_dma_mask, dev->bus_dma_mask);
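The exported check is a single range comparison against the tighter of the two device masks. A self-contained sketch of the same arithmetic (userspace mock; it assumes phys_to_dma_direct() is the identity, i.e. no bus address offset, and min_not_zero() and dma_coherent_ok() are re-implemented here rather than taken from kernel headers):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mock of the kernel's min_not_zero(): prefer the smaller nonzero mask. */
    static uint64_t min_not_zero(uint64_t a, uint64_t b)
    {
    	if (!a)
    		return b;
    	if (!b)
    		return a;
    	return a < b ? a : b;
    }

    /* Mock dma_coherent_ok(): does [phys, phys + size) fit under the masks?
     * Assumes physical == bus address (no translation). */
    static bool dma_coherent_ok(uint64_t phys, uint64_t size,
    			    uint64_t coherent_dma_mask, uint64_t bus_dma_mask)
    {
    	return phys + size - 1 <= min_not_zero(coherent_dma_mask, bus_dma_mask);
    }

    int main(void)
    {
    	uint64_t mask32 = 0xffffffffULL;	/* DMA_BIT_MASK(32) */

    	/* A buffer ending exactly at 4 GiB passes; one above it fails. */
    	printf("%d\n", dma_coherent_ok(0xfffff000ULL, 0x1000, mask32, 0));
    	printf("%d\n", dma_coherent_ok(0x100000000ULL, 0x1000, mask32, 0));
    	return 0;
    }

min_not_zero() matters because bus_dma_mask may be zero (unset), in which case only the coherent mask constrains the allocation.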
From patchwork Fri Sep 25 16:19:11 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263441
Subject: [PATCH 25/30 for 5.4] dma-pool: get rid of dma_in_atomic_pool()
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Nicolas Saenz Julienne, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:11 -0700
Message-Id: <20200925161916.204667-26-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream 23e469be6239d9cf3d921fc3e38545491df56534 commit.

The function is only used once and can be simplified to a one-liner.
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 39ca26fa41b5..318035e093fb 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -217,15 +217,6 @@ static inline struct gen_pool *dev_to_pool(struct device *dev)
 	return atomic_pool_kernel;
 }
 
-static bool dma_in_atomic_pool(struct device *dev, void *start, size_t size)
-{
-	struct gen_pool *pool = dev_to_pool(dev);
-
-	if (unlikely(!pool))
-		return false;
-	return gen_pool_has_addr(pool, (unsigned long)start, size);
-}
-
 void *dma_alloc_from_pool(struct device *dev, size_t size,
 			  struct page **ret_page, gfp_t flags)
 {
@@ -260,7 +251,7 @@ bool dma_free_from_pool(struct device *dev, void *start, size_t size)
 {
 	struct gen_pool *pool = dev_to_pool(dev);
 
-	if (!dma_in_atomic_pool(dev, start, size))
+	if (!pool || !gen_pool_has_addr(pool, (unsigned long)start, size))
 		return false;
 	gen_pool_free(pool, (unsigned long)start, size);
 	return true;
From patchwork Fri Sep 25 16:19:12 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309182
Subject: [PATCH 26/30 for 5.4] dma-pool: introduce dma_guess_pool()
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Nicolas Saenz Julienne, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:12 -0700
Message-Id: <20200925161916.204667-27-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream 48b6703858dd5526c82d8ff2dbac59acab3a9dda commit.

dma-pool's dev_to_pool() creates the false impression that there is a
way to guarantee a mapping between a device's DMA constraints and an
atomic pool. It turns out it's just a guess, and the device might need
to use an atomic pool containing memory from a 'safer' (or lower)
memory zone.

To help mitigate this, introduce dma_guess_pool() which can be fed a
device's DMA constraints and atomic pools already known to be faulty,
in order for it to provide a better guess on which pool to use.

Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 318035e093fb..5b9eaa2b498d 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -203,7 +203,7 @@ static int __init dma_atomic_pool_init(void)
 }
 postcore_initcall(dma_atomic_pool_init);
 
-static inline struct gen_pool *dev_to_pool(struct device *dev)
+static inline struct gen_pool *dma_guess_pool_from_device(struct device *dev)
 {
 	u64 phys_mask;
 	gfp_t gfp;
@@ -217,10 +217,30 @@ static inline struct gen_pool *dev_to_pool(struct device *dev)
 	return atomic_pool_kernel;
 }
 
+static inline struct gen_pool *dma_get_safer_pool(struct gen_pool *bad_pool)
+{
+	if (bad_pool == atomic_pool_kernel)
+		return atomic_pool_dma32 ? : atomic_pool_dma;
+
+	if (bad_pool == atomic_pool_dma32)
+		return atomic_pool_dma;
+
+	return NULL;
+}
+
+static inline struct gen_pool *dma_guess_pool(struct device *dev,
+					      struct gen_pool *bad_pool)
+{
+	if (bad_pool)
+		return dma_get_safer_pool(bad_pool);
+
+	return dma_guess_pool_from_device(dev);
+}
+
 void *dma_alloc_from_pool(struct device *dev, size_t size,
 			  struct page **ret_page, gfp_t flags)
 {
-	struct gen_pool *pool = dev_to_pool(dev);
+	struct gen_pool *pool = dma_guess_pool(dev, NULL);
 	unsigned long val;
 	void *ptr = NULL;
 
@@ -249,7 +269,7 @@ void *dma_alloc_from_pool(struct device *dev, size_t size,
 
 bool dma_free_from_pool(struct device *dev, void *start, size_t size)
 {
-	struct gen_pool *pool = dev_to_pool(dev);
+	struct gen_pool *pool = dma_guess_pool(dev, NULL);
 
 	if (!pool || !gen_pool_has_addr(pool, (unsigned long)start, size))
 		return false;
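The fallback order these two helpers encode is fixed: the plain kernel pool steps down to the DMA32 pool (or directly to the DMA pool when no DMA32 pool exists), the DMA32 pool steps down to the DMA pool, and the DMA pool has nowhere safer to go. A compact mock of that walk (illustrative string tags instead of struct gen_pool pointers; set pool_dma32 to NULL to model a build without ZONE_DMA32):

    #include <stdio.h>

    /* Stand-ins for the three atomic pools; NULL models an absent pool. */
    static const char *pool_kernel = "kernel";
    static const char *pool_dma32  = "dma32";
    static const char *pool_dma    = "dma";

    /* Mock of dma_get_safer_pool(): step down to the next safer pool. */
    static const char *get_safer_pool(const char *bad_pool)
    {
    	if (bad_pool == pool_kernel)
    		return pool_dma32 ? pool_dma32 : pool_dma;
    	if (bad_pool == pool_dma32)
    		return pool_dma;
    	return NULL;	/* already at the safest pool: give up */
    }

    int main(void)
    {
    	const char *p = pool_kernel;

    	while (p) {
    		printf("try pool: %s\n", p);
    		p = get_safer_pool(p);
    	}
    	return 0;
    }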
From patchwork Fri Sep 25 16:19:13 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309181
Subject: [PATCH 27/30 for 5.4] dma-pool: make sure atomic pool suits device
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Jeremy Linton, Robin Murphy, Nicolas Saenz Julienne, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:13 -0700
Message-Id: <20200925161916.204667-28-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream 81e9d894e03f9a279102c7aac62ea7cbf9949f4b commit.

When allocating DMA memory from a pool, the core can only guess which
atomic pool will fit a device's constraints. If it doesn't, get a safer
atomic pool and try again.

Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
Reported-by: Jeremy Linton
Suggested-by: Robin Murphy
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 57 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 5b9eaa2b498d..d48d9acb585f 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -240,39 +240,56 @@ static inline struct gen_pool *dma_guess_pool(struct device *dev,
 void *dma_alloc_from_pool(struct device *dev, size_t size,
 			  struct page **ret_page, gfp_t flags)
 {
-	struct gen_pool *pool = dma_guess_pool(dev, NULL);
-	unsigned long val;
+	struct gen_pool *pool = NULL;
+	unsigned long val = 0;
 	void *ptr = NULL;
-
-	if (!pool) {
-		WARN(1, "%pGg atomic pool not initialised!\n", &flags);
-		return NULL;
+	phys_addr_t phys;
+
+	while (1) {
+		pool = dma_guess_pool(dev, pool);
+		if (!pool) {
+			WARN(1, "Failed to get suitable pool for %s\n",
+			     dev_name(dev));
+			break;
+		}
+
+		val = gen_pool_alloc(pool, size);
+		if (!val)
+			continue;
+
+		phys = gen_pool_virt_to_phys(pool, val);
+		if (dma_coherent_ok(dev, phys, size))
+			break;
+
+		gen_pool_free(pool, val, size);
+		val = 0;
 	}
 
-	val = gen_pool_alloc(pool, size);
-	if (likely(val)) {
-		phys_addr_t phys = gen_pool_virt_to_phys(pool, val);
+	if (val) {
 		*ret_page = pfn_to_page(__phys_to_pfn(phys));
 		ptr = (void *)val;
 		memset(ptr, 0, size);
-	} else {
-		WARN_ONCE(1, "DMA coherent pool depleted, increase size "
-			  "(recommended min coherent_pool=%zuK)\n",
-			  gen_pool_size(pool) >> 9);
+
+		if (gen_pool_avail(pool) < atomic_pool_size)
+			schedule_work(&atomic_pool_work);
 	}
-	if (gen_pool_avail(pool) < atomic_pool_size)
-		schedule_work(&atomic_pool_work);
 
 	return ptr;
 }
 
 bool dma_free_from_pool(struct device *dev, void *start, size_t size)
 {
-	struct gen_pool *pool = dma_guess_pool(dev, NULL);
+	struct gen_pool *pool = NULL;
 
-	if (!pool || !gen_pool_has_addr(pool, (unsigned long)start, size))
-		return false;
-	gen_pool_free(pool, (unsigned long)start, size);
-	return true;
+	while (1) {
+		pool = dma_guess_pool(dev, pool);
+		if (!pool)
+			return false;
+
+		if (gen_pool_has_addr(pool, (unsigned long)start, size)) {
+			gen_pool_free(pool, (unsigned long)start, size);
+			return true;
+		}
+	}
 }
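The loop above implements guess-allocate-verify: take the pool suggested for the device, allocate from it, check the resulting physical address against the device's mask with dma_coherent_ok(), and on a mismatch release the memory and retry from the next safer pool. A userspace sketch of the same flow (struct pool, coherent_ok() and alloc_verified() are invented names for illustration; each mock pool hands out one fixed "physical" base instead of running a real allocator):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Each mock pool hands out memory from a fixed "physical" base. */
    struct pool { const char *name; uint64_t base; };

    static bool coherent_ok(uint64_t phys, uint64_t size, uint64_t mask)
    {
    	return phys + size - 1 <= mask;
    }

    /* Walk the pools from least to most restrictive, as the patch does:
     * allocate, verify against the device mask, back out on mismatch. */
    static const struct pool *alloc_verified(const struct pool *pools, int n,
    					 uint64_t size, uint64_t mask)
    {
    	for (int i = 0; i < n; i++) {
    		uint64_t phys = pools[i].base;	/* pretend gen_pool_alloc() */

    		if (coherent_ok(phys, size, mask))
    			return &pools[i];
    		/* pretend gen_pool_free(); fall through to a safer pool */
    	}
    	return NULL;
    }

    int main(void)
    {
    	const struct pool pools[] = {
    		{ "kernel", 0x100000000ULL },	/* above 4 GiB */
    		{ "dma32",  0x80000000ULL },	/* below 4 GiB */
    		{ "dma",    0x00100000ULL },	/* below 16 MiB */
    	};
    	const struct pool *p = alloc_verified(pools, 3, 0x1000,
    					      0xffffffffULL);

    	printf("chose pool: %s\n", p ? p->name : "none");
    	return 0;
    }

With a 32-bit mask the kernel pool's high buffer is rejected and the dma32 pool is chosen, mirroring the retry behaviour of the patched kernel code.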
From patchwork Fri Sep 25 16:19:14 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263439
Subject: [PATCH 28/30 for 5.4] dma-pool: do not allocate pool memory from CMA
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Jeremy Linton, Nicolas Saenz Julienne, David Rientjes, Christoph Hellwig
Date: Fri, 25 Sep 2020 09:19:14 -0700
Message-Id: <20200925161916.204667-29-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream d9765e41d8e9ea2251bf73735a2895c8bad546fc commit.

There is no guarantee about CMA's placement, so allocating a
zone-specific atomic pool from CMA might return memory from a completely
different memory zone. So stop using it.

Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
Reported-by: Jeremy Linton
Signed-off-by: Nicolas Saenz Julienne
Tested-by: Jeremy Linton
Acked-by: David Rientjes
Signed-off-by: Christoph Hellwig
Signed-off-by: Peter Gonda
---
 kernel/dma/pool.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index d48d9acb585f..6bc74a2d5127 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -6,7 +6,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -69,12 +68,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 
 	do {
 		pool_size = 1 << (PAGE_SHIFT + order);
-
-		if (dev_get_cma_area(NULL))
-			page = dma_alloc_from_contiguous(NULL, 1 << order,
-							 order, false);
-		else
-			page = alloc_pages(gfp, order);
+		page = alloc_pages(gfp, order);
 	} while (!page && order-- > 0);
 	if (!page)
 		goto out;
@@ -118,8 +112,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 	dma_common_free_remap(addr, pool_size);
 #endif
 free_page: __maybe_unused
-	if (!dma_release_from_contiguous(NULL, page, 1 << order))
-		__free_pages(page, order);
+	__free_pages(page, order);
 out:
 	return ret;
 }
From patchwork Fri Sep 25 16:19:15 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 309180
Subject: [PATCH 29/30 for 5.4] dma-pool: fix coherent pool allocations for IOMMU mappings
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Amit Pundir
Date: Fri, 25 Sep 2020 09:19:15 -0700
Message-Id: <20200925161916.204667-30-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Christoph Hellwig

upstream 9420139f516d7fbc248ce17f35275cb005ed98ea commit.

When allocating coherent pool memory for an IOMMU mapping we don't care
about the DMA mask. Move the guess for the initial GFP mask into
dma_direct_alloc_pages() and pass dma_coherent_ok as a function pointer
argument so that it doesn't get applied to the IOMMU case.
Signed-off-by: Christoph Hellwig
Tested-by: Amit Pundir
Change-Id: I343ae38a73135948f8f8bb9ae9a12034c7d4c405
Signed-off-by: Peter Gonda
---
 drivers/iommu/dma-iommu.c | 4 +-
 include/linux/dma-direct.h | 3 -
 include/linux/dma-mapping.h | 5 +-
 kernel/dma/direct.c | 11 +++-
 kernel/dma/pool.c | 114 +++++++++++++++---------------------
 5 files changed, 61 insertions(+), 76 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b642c1123a29..f917bd10f47c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1010,8 +1010,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 
 	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
 	    !gfpflags_allow_blocking(gfp) && !coherent)
-		cpu_addr = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page,
-					       gfp);
+		page = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &cpu_addr,
+					   gfp, NULL);
 	else
 		cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs);
 	if (!cpu_addr)
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 8ccddee1f78a..6db863c3eb93 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -66,9 +66,6 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 }
 
 u64 dma_direct_get_required_mask(struct device *dev);
-gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
-		u64 *phys_mask);
-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);
 void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs);
 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index e4be706d8f5e..246a4b429612 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -633,8 +633,9 @@ void *dma_common_pages_remap(struct page **pages, size_t size,
 			pgprot_t prot, const void *caller);
 void dma_common_free_remap(void *cpu_addr, size_t size);
 
-void *dma_alloc_from_pool(struct device *dev, size_t size,
-			  struct page **ret_page, gfp_t flags);
+struct page *dma_alloc_from_pool(struct device *dev, size_t size,
+		void **cpu_addr, gfp_t flags,
+		bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));
 bool dma_free_from_pool(struct device *dev, void *start, size_t size);
 
 int
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 54c1c3a20c09..71be82b07743 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -84,7 +84,7 @@ gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	return 0;
 }
 
-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
+static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 {
 	return phys_to_dma_direct(dev, phys) + size - 1 <=
 			min_not_zero(dev->coherent_dma_mask, dev->bus_dma_mask);
@@ -177,8 +177,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	size = PAGE_ALIGN(size);
 
 	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
-		ret = dma_alloc_from_pool(dev, size, &page, gfp);
-		if (!ret)
+		u64 phys_mask;
+
+		gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+				&phys_mask);
+		page = dma_alloc_from_pool(dev, size, &ret, gfp,
+				dma_coherent_ok);
+		if (!page)
 			return NULL;
 		goto done;
 	}
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 6bc74a2d5127..5d071d4a3cba 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -196,93 +196,75 @@ static int __init dma_atomic_pool_init(void)
 }
 postcore_initcall(dma_atomic_pool_init);
 
-static inline struct gen_pool *dma_guess_pool_from_device(struct device *dev)
+static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
 {
-	u64 phys_mask;
-	gfp_t gfp;
-
-	gfp = dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
-					  &phys_mask);
-	if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp == GFP_DMA)
+	if (prev == NULL) {
+		if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+			return atomic_pool_dma32;
+		if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
+			return atomic_pool_dma;
+		return atomic_pool_kernel;
+	}
+	if (prev == atomic_pool_kernel)
+		return atomic_pool_dma32 ? atomic_pool_dma32 : atomic_pool_dma;
+	if (prev == atomic_pool_dma32)
 		return atomic_pool_dma;
-	if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp == GFP_DMA32)
-		return atomic_pool_dma32;
-	return atomic_pool_kernel;
+	return NULL;
 }
 
-static inline struct gen_pool *dma_get_safer_pool(struct gen_pool *bad_pool)
+static struct page *__dma_alloc_from_pool(struct device *dev, size_t size,
+		struct gen_pool *pool, void **cpu_addr,
+		bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t))
 {
-	if (bad_pool == atomic_pool_kernel)
-		return atomic_pool_dma32 ? : atomic_pool_dma;
+	unsigned long addr;
+	phys_addr_t phys;
 
-	if (bad_pool == atomic_pool_dma32)
-		return atomic_pool_dma;
+	addr = gen_pool_alloc(pool, size);
+	if (!addr)
+		return NULL;
 
-	return NULL;
-}
+	phys = gen_pool_virt_to_phys(pool, addr);
+	if (phys_addr_ok && !phys_addr_ok(dev, phys, size)) {
+		gen_pool_free(pool, addr, size);
+		return NULL;
+	}
 
-static inline struct gen_pool *dma_guess_pool(struct device *dev,
-					      struct gen_pool *bad_pool)
-{
-	if (bad_pool)
-		return dma_get_safer_pool(bad_pool);
+	if (gen_pool_avail(pool) < atomic_pool_size)
+		schedule_work(&atomic_pool_work);
 
-	return dma_guess_pool_from_device(dev);
+	*cpu_addr = (void *)addr;
+	memset(*cpu_addr, 0, size);
+	return pfn_to_page(__phys_to_pfn(phys));
 }
 
-void *dma_alloc_from_pool(struct device *dev, size_t size,
-			  struct page **ret_page, gfp_t flags)
+struct page *dma_alloc_from_pool(struct device *dev, size_t size,
+		void **cpu_addr, gfp_t gfp,
+		bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t))
 {
 	struct gen_pool *pool = NULL;
-	unsigned long val = 0;
-	void *ptr = NULL;
-	phys_addr_t phys;
-
-	while (1) {
-		pool = dma_guess_pool(dev, pool);
-		if (!pool) {
-			WARN(1, "Failed to get suitable pool for %s\n",
-			     dev_name(dev));
-			break;
-		}
-
-		val = gen_pool_alloc(pool, size);
-		if (!val)
-			continue;
-
-		phys = gen_pool_virt_to_phys(pool, val);
-		if (dma_coherent_ok(dev, phys, size))
-			break;
-
-		gen_pool_free(pool, val, size);
-		val = 0;
-	}
-
-
-	if (val) {
-		*ret_page = pfn_to_page(__phys_to_pfn(phys));
-		ptr = (void *)val;
-		memset(ptr, 0, size);
+	struct page *page;
 
-		if (gen_pool_avail(pool) < atomic_pool_size)
-			schedule_work(&atomic_pool_work);
+	while ((pool = dma_guess_pool(pool, gfp))) {
+		page = __dma_alloc_from_pool(dev, size, pool, cpu_addr,
+					     phys_addr_ok);
+		if (page)
+			return page;
 	}
 
-	return ptr;
+	WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev));
+	return NULL;
 }
 
 bool dma_free_from_pool(struct device *dev, void *start, size_t size)
 {
 	struct gen_pool *pool = NULL;
 
-	while (1) {
-		pool = dma_guess_pool(dev, pool);
-		if (!pool)
-			return false;
-
-		if (gen_pool_has_addr(pool, (unsigned long)start, size)) {
-			gen_pool_free(pool, (unsigned long)start, size);
-			return true;
-		}
+	while ((pool = dma_guess_pool(pool, 0))) {
+		if (!gen_pool_has_addr(pool, (unsigned long)start, size))
+			continue;
+		gen_pool_free(pool, (unsigned long)start, size);
+		return true;
 	}
+
+	return false;
 }
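The reworked interface makes the coherence check a caller-supplied predicate: dma-direct passes dma_coherent_ok, while the IOMMU path passes NULL because the IOMMU will remap the buffer anyway. A minimal mock of that design (phys_addr_ok_t, pool_alloc() and fits_32bit() are invented names for illustration only):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef bool (*phys_addr_ok_t)(uint64_t phys, uint64_t size);

    /* Mock allocator in the shape of the reworked dma_alloc_from_pool():
     * a NULL callback (the IOMMU path) accepts any physical address. */
    static bool pool_alloc(uint64_t phys, uint64_t size, phys_addr_ok_t ok)
    {
    	if (ok && !ok(phys, size))
    		return false;	/* back out; caller tries a safer pool */
    	return true;
    }

    static bool fits_32bit(uint64_t phys, uint64_t size)
    {
    	return phys + size - 1 <= 0xffffffffULL;
    }

    int main(void)
    {
    	uint64_t high = 0x100000000ULL;	/* just above 4 GiB */

    	/* Direct mapping: the 32-bit check rejects a high buffer. */
    	printf("direct: %d\n", pool_alloc(high, 0x1000, fits_32bit));
    	/* IOMMU mapping: no check, any buffer is acceptable. */
    	printf("iommu:  %d\n", pool_alloc(high, 0x1000, NULL));
    	return 0;
    }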
From patchwork Fri Sep 25 16:19:16 2020
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 263438
Subject: [PATCH 30/30 for 5.4] dma/direct: turn ARCH_ZONE_DMA_BITS into a variable
From: Peter Gonda
To: stable@vger.kernel.org
Cc: Peter Gonda, Christoph Hellwig, Nicolas Saenz Julienne, Catalin Marinas
Date: Fri, 25 Sep 2020 09:19:16 -0700
Message-Id: <20200925161916.204667-31-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
References: <20200925161916.204667-1-pgonda@google.com>

From: Nicolas Saenz Julienne

upstream 8b5369ea580964dbc982781bfb9fb93459fc5e8d commit.

Some architectures, notably ARM, are interested in tweaking this
depending on their runtime DMA addressing limitations.

Acked-by: Christoph Hellwig
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Catalin Marinas
Change-Id: I890f2bfbbf5758e3868acddd7bba6f655ec2b357
Signed-off-by: Peter Gonda
---
 arch/arm64/mm/init.c | 9 ++++++++-
 arch/powerpc/include/asm/page.h | 9 ---------
 arch/powerpc/mm/mem.c | 20 +++++++++++++++-----
 arch/s390/include/asm/page.h | 2 --
 arch/s390/mm/init.c | 1 +
 include/linux/dma-direct.h | 2 ++
 kernel/dma/direct.c | 13 ++++++-------
 7 files changed, 32 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 45c00a54909c..214cedc9271c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -41,6 +42,8 @@
 #include
 #include
 
+#define ARM64_ZONE_DMA_BITS	30
+
 /*
  * We need to be able to catch inadvertent references to memstart_addr
  * that occur (potentially in generic code) before arm64_memblock_init()
@@ -418,7 +421,11 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	/* 4GB maximum for 32-bit only capable devices */
+	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
+		zone_dma_bits = ARM64_ZONE_DMA_BITS;
+		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
+	}
+
 	if (IS_ENABLED(CONFIG_ZONE_DMA32))
 		arm64_dma_phys_limit = max_zone_dma_phys();
 	else
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 6ba5adb96a3b..d568ce08e3b2 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -334,13 +334,4 @@ struct vm_area_struct;
 #endif /* __ASSEMBLY__ */
 #include
 
-/*
- * Allow 30-bit DMA for very limited Broadcom wifi chips on many powerbooks.
- */
-#ifdef CONFIG_PPC32
-#define ARCH_ZONE_DMA_BITS 30
-#else
-#define ARCH_ZONE_DMA_BITS 31
-#endif
-
 #endif /* _ASM_POWERPC_PAGE_H */
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 96ca90ce0264..3b99b6b67fb5 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -223,10 +224,10 @@ static int __init mark_nonram_nosave(void)
  * everything else. GFP_DMA32 page allocations automatically fall back to
  * ZONE_DMA.
  *
- * By using 31-bit unconditionally, we can exploit ARCH_ZONE_DMA_BITS to
- * inform the generic DMA mapping code. 32-bit only devices (if not handled
- * by an IOMMU anyway) will take a first dip into ZONE_NORMAL and get
- * otherwise served by ZONE_DMA.
+ * By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
+ * generic DMA mapping code. 32-bit only devices (if not handled by an IOMMU
+ * anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
+ * ZONE_DMA.
  */
 static unsigned long max_zone_pfns[MAX_NR_ZONES];
 
@@ -259,9 +260,18 @@ void __init paging_init(void)
 	printk(KERN_DEBUG "Memory hole size: %ldMB\n",
 	       (long int)((top_of_ram - total_ram) >> 20));
 
+	/*
+	 * Allow 30-bit DMA for very limited Broadcom wifi chips on many
+	 * powerbooks.
+	 */
+	if (IS_ENABLED(CONFIG_PPC32))
+		zone_dma_bits = 30;
+	else
+		zone_dma_bits = 31;
+
 #ifdef CONFIG_ZONE_DMA
 	max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
-				      1UL << (ARCH_ZONE_DMA_BITS - PAGE_SHIFT));
+				      1UL << (zone_dma_bits - PAGE_SHIFT));
 #endif
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index e399102367af..1019efd85b9d 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -179,8 +179,6 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
 				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
-#define ARCH_ZONE_DMA_BITS	31
-
 #include
 #include
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index c1d96e588152..ac44bd76db4b 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -118,6 +118,7 @@ void __init paging_init(void)
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
+	zone_dma_bits = 31;
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 6db863c3eb93..f3b276242f2d 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -6,6 +6,8 @@
 #include	/* for min_low_pfn */
 #include
 
+extern unsigned int zone_dma_bits;
+
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr);
 
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 71be82b07743..2af418b4b6f9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -17,12 +17,11 @@
 #include
 
 /*
- * Most architectures use ZONE_DMA for the first 16 Megabytes, but
- * some use it for entirely different regions:
+ * Most architectures use ZONE_DMA for the first 16 Megabytes, but some use
+ * it for entirely different regions. In that case the arch code needs to
+ * override the variable below for dma-direct to work properly.
  */
-#ifndef ARCH_ZONE_DMA_BITS
-#define ARCH_ZONE_DMA_BITS 24
-#endif
+unsigned int zone_dma_bits __ro_after_init = 24;
 
 static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
 {
@@ -77,7 +76,7 @@ gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
 	 * zones.
	 */
-	if (*phys_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
+	if (*phys_mask <= DMA_BIT_MASK(zone_dma_bits))
 		return GFP_DMA;
 	if (*phys_mask <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -547,7 +546,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	u64 min_mask;
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(ARCH_ZONE_DMA_BITS);
+		min_mask = DMA_BIT_MASK(zone_dma_bits);
 	else
 		min_mask = DMA_BIT_MASK(32);
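The net effect of this final patch is that the ZONE_DMA ceiling is computed from a variable at boot rather than baked in per architecture at compile time. A small userspace sketch of the idea (DMA_BIT_MASK is re-implemented here to match the kernel macro; the 30-bit value mirrors what the arm64 hunk above sets):

    #include <stdio.h>

    /* Mock of the kernel's DMA_BIT_MASK(). */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    /* With a compile-time constant this had to be fixed per architecture;
     * as a variable, boot code can tune it (arm64 uses 30, s390 31). */
    static unsigned int zone_dma_bits = 24;	/* generic default */

    int main(void)
    {
    	printf("default ZONE_DMA limit: %#llx\n", DMA_BIT_MASK(zone_dma_bits));
    	zone_dma_bits = 30;	/* as set in arm64_memblock_init() */
    	printf("arm64 ZONE_DMA limit:   %#llx\n", DMA_BIT_MASK(zone_dma_bits));
    	return 0;
    }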