From patchwork Tue Feb 9 06:21:18 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 379531
From: Claire Chang
To: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
    Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
    jgross@suse.com, Christoph Hellwig, Marek Szyprowski
Cc: benh@kernel.crashing.org, paulus@samba.org,
    "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org, Robin Murphy,
    grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding,
    mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
    Greg KH, Saravana Kannan, "Rafael J . Wysocki",
    heikki.krogerus@linux.intel.com, Andy Shevchenko, Randy Dunlap,
    Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
    linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
    Nicolas Boichat, Jim Quinlan, Claire Chang
Subject: [PATCH v4 01/14] swiotlb: Remove external access to io_tlb_start
Date: Tue, 9 Feb 2021 14:21:18 +0800
Message-Id: <20210209062131.2300005-2-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailing-List: devicetree@vger.kernel.org

Add a new function, get_swiotlb_start(), and remove external access to
io_tlb_start, so that struct swiotlb can be hidden entirely inside
swiotlb.c in the following patches.

Signed-off-by: Claire Chang

---
 arch/powerpc/platforms/pseries/svm.c | 4 ++--
 drivers/xen/swiotlb-xen.c            | 4 ++--
 include/linux/swiotlb.h              | 1 +
 kernel/dma/swiotlb.c                 | 5 +++++
 4 files changed, 10 insertions(+), 4 deletions(-)

This can be dropped if Christoph's swiotlb cleanups land:
https://lore.kernel.org/linux-iommu/20210207160934.2955931-1-hch@lst.de/T/#m7124f29b6076d462101fcff6433295157621da09

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a9..c10c51d49f3d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,8 +55,8 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;
 
-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
+	if (vstart)
+		memblock_free_early(vstart,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..91f8c68d1a9b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (is_swiotlb_active()) {
+		xen_io_tlb_start = phys_to_virt(get_swiotlb_start());
 		goto end;
 	}
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d9c9fc9ca5d2..83200f3b042a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -81,6 +81,7 @@ void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
+phys_addr_t get_swiotlb_start(void);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7c42df6e6100..e180211f6ad9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -719,6 +719,11 @@ bool is_swiotlb_active(void)
 	return io_tlb_end != 0;
 }
 
+phys_addr_t get_swiotlb_start(void)
+{
+	return io_tlb_start;
+}
+
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
 {
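A minimal sketch of how a caller outside swiotlb.c is expected to locate
the bounce buffer once io_tlb_start is hidden, mirroring the xen-swiotlb
conversion above. The helper name is hypothetical; the two accessors are
the ones this patch exports:

	#include <linux/io.h>		/* phys_to_virt() */
	#include <linux/swiotlb.h>

	static void *example_io_tlb_vaddr(void)
	{
		/*
		 * Probe first: the start address is only meaningful once
		 * the default pool has actually been allocated.
		 */
		if (!is_swiotlb_active())
			return NULL;

		return phys_to_virt(get_swiotlb_start());
	}
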
From patchwork Tue Feb 9 06:21:20 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 379530
From: Claire Chang
To/Cc: (same recipients as [PATCH v4 01/14])
Subject: [PATCH v4 03/14] swiotlb: Add struct swiotlb
Date: Tue, 9 Feb 2021 14:21:20 +0800
Message-Id: <20210209062131.2300005-4-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailing-List: devicetree@vger.kernel.org

Add a new struct, swiotlb, as the IO TLB memory pool descriptor, and
move the relevant global variables into it. This will be useful later
when adding restricted DMA pool support.

Signed-off-by: Claire Chang

---
 kernel/dma/swiotlb.c | 327 +++++++++++++++++++++++--------------------
 1 file changed, 172 insertions(+), 155 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 678490d39e55..28b7bfe7a2a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -61,33 +61,43 @@
  * allocate a contiguous 1MB, we're probably in trouble anyway.
  */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
 
 enum swiotlb_force swiotlb_force;
 
 /*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static phys_addr_t io_tlb_start, io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * The number of used IO TLB block
- */
-static unsigned long io_tlb_used;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
+ * struct swiotlb - Software IO TLB Memory Pool Descriptor
+ *
+ * @start:	The start address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @end:	The end address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
+ *		@end. This is command line adjustable via setup_io_tlb_npages.
+ * @used:	The number of used IO TLB blocks.
+ * @list:	The free list describing the number of free entries available
+ *		from each index.
+ * @index:	The index to start searching in the next round.
+ * @orig_addr:	The original address corresponding to a mapped entry for the
+ *		sync operations.
+ * @lock:	The lock to protect the above data structures in the map and
+ *		unmap calls.
+ * @debugfs:	The dentry to debugfs.
  */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+struct swiotlb {
+	phys_addr_t start;
+	phys_addr_t end;
+	unsigned long nslabs;
+	unsigned long used;
+	unsigned int *list;
+	unsigned int index;
+	phys_addr_t *orig_addr;
+	spinlock_t lock;
+	struct dentry *debugfs;
+};
+static struct swiotlb default_swiotlb;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -95,27 +105,17 @@ static unsigned int io_tlb_index;
  */
 static unsigned int max_segment;
 
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
-static phys_addr_t *io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
 static int late_alloc;
 
 static int __init
 setup_io_tlb_npages(char *str)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
+
 	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		swiotlb->nslabs = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -123,7 +123,7 @@ setup_io_tlb_npages(char *str)
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
-		io_tlb_nslabs = 1;
+		swiotlb->nslabs = 1;
 	}
 
 	return 0;
@@ -134,7 +134,7 @@ static bool no_iotlb_memory;
 
 unsigned long swiotlb_nr_tbl(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs;
+	return unlikely(no_iotlb_memory) ? 0 : default_swiotlb.nslabs;
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
@@ -156,13 +156,14 @@ unsigned long swiotlb_size_or_default(void)
 {
 	unsigned long size;
 
-	size = io_tlb_nslabs << IO_TLB_SHIFT;
+	size = default_swiotlb.nslabs << IO_TLB_SHIFT;
 
 	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
 void __init swiotlb_adjust_size(unsigned long new_size)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long size;
 
 	/*
@@ -170,10 +171,10 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 	 * architectures such as those supporting memory encryption to
 	 * adjust/expand SWIOTLB size for their use.
 	 */
-	if (!io_tlb_nslabs) {
+	if (!swiotlb->nslabs) {
 		size = ALIGN(new_size, 1 << IO_TLB_SHIFT);
-		io_tlb_nslabs = size >> IO_TLB_SHIFT;
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = size >> IO_TLB_SHIFT;
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
 		pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 	}
@@ -181,14 +182,15 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 
 void swiotlb_print_info(void)
 {
-	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	if (no_iotlb_memory) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end,
+	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &swiotlb->start, &swiotlb->end,
 	       bytes >> 20);
 }
 
@@ -200,57 +202,61 @@ void swiotlb_print_info(void)
  */
 void __init swiotlb_update_mem_attributes(void)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	void *vaddr;
 	unsigned long bytes;
 
 	if (no_iotlb_memory || late_alloc)
 		return;
 
-	vaddr = phys_to_virt(io_tlb_start);
-	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	vaddr = phys_to_virt(swiotlb->start);
+	bytes = PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
 	memset(vaddr, 0, bytes);
 }
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 	size_t alloc_size;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = __pa(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = __pa(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	/*
 	 * Allocate and initialize the free list array. This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int));
-	io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_list)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(int));
+	swiotlb->list = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->list)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t));
-	io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_addr)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t));
+	swiotlb->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->orig_addr)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
+
 	return 0;
 }
 
@@ -261,26 +267,27 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 void  __init
 swiotlb_init(int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	size_t default_size = IO_TLB_DEFAULT_SIZE;
 	unsigned char *vstart;
 	unsigned long bytes;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	/* Get IO TLB memory from the low pages */
 	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
+	if (vstart && !swiotlb_init_with_tbl(vstart, swiotlb->nslabs, verbose))
 		return;
 
-	if (io_tlb_start) {
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
-		io_tlb_start = 0;
+	if (swiotlb->start) {
+		memblock_free_early(swiotlb->start,
+				    PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
+		swiotlb->start = 0;
 	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
@@ -294,22 +301,23 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long bytes, req_nslabs = io_tlb_nslabs;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes, req_nslabs = swiotlb->nslabs;
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
 	/*
	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
-	io_tlb_nslabs = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(swiotlb->nslabs << IO_TLB_SHIFT);
+	swiotlb->nslabs = SLABS_PER_PAGE << order;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
@@ -320,15 +328,15 @@ swiotlb_late_init_with_default_size(size_t default_size)
 	}
 
 	if (!vstart) {
-		io_tlb_nslabs = req_nslabs;
+		swiotlb->nslabs = req_nslabs;
 		return -ENOMEM;
 	}
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs = SLABS_PER_PAGE << order;
+		swiotlb->nslabs = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs);
+	rc = swiotlb_late_init_with_tbl(vstart, swiotlb->nslabs);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
@@ -337,22 +345,25 @@ swiotlb_late_init_with_default_size(size_t default_size)
 
 static void swiotlb_cleanup(void)
 {
-	io_tlb_end = 0;
-	io_tlb_start = 0;
-	io_tlb_nslabs = 0;
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	swiotlb->end = 0;
+	swiotlb->start = 0;
+	swiotlb->nslabs = 0;
 	max_segment = 0;
 }
 
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = virt_to_phys(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = virt_to_phys(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
 
@@ -360,39 +371,40 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	/*
 	 * Allocate and initialize the free list array. This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-	                              get_order(io_tlb_nslabs * sizeof(int)));
-	if (!io_tlb_list)
+	swiotlb->list = (unsigned int *)__get_free_pages(GFP_KERNEL,
+				get_order(swiotlb->nslabs * sizeof(int)));
+	if (!swiotlb->list)
 		goto cleanup3;
 
-	io_tlb_orig_addr = (phys_addr_t *)
+	swiotlb->orig_addr = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs *
+				 get_order(swiotlb->nslabs *
 					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr)
+	if (!swiotlb->orig_addr)
 		goto cleanup4;
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	swiotlb_print_info();
 
 	late_alloc = 1;
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
 
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-							 sizeof(int)));
-	io_tlb_list = NULL;
+	free_pages((unsigned long)swiotlb->list,
+		   get_order(swiotlb->nslabs * sizeof(int)));
+	swiotlb->list = NULL;
 cleanup3:
 	swiotlb_cleanup();
 	return -ENOMEM;
@@ -400,23 +412,25 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	if (!io_tlb_orig_addr)
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	if (!swiotlb->orig_addr)
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_addr,
-			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-								 sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(io_tlb_start),
-			   get_order(io_tlb_nslabs << IO_TLB_SHIFT));
+		free_pages((unsigned long)swiotlb->orig_addr,
+			   get_order(swiotlb->nslabs * sizeof(phys_addr_t)));
+		free_pages((unsigned long)swiotlb->list,
+			   get_order(swiotlb->nslabs * sizeof(int)));
+		free_pages((unsigned long)phys_to_virt(swiotlb->start),
+			   get_order(swiotlb->nslabs << IO_TLB_SHIFT));
 	} else {
-		memblock_free_late(__pa(io_tlb_orig_addr),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_list),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
-		memblock_free_late(io_tlb_start,
-				   PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		memblock_free_late(__pa(swiotlb->orig_addr),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t)));
+		memblock_free_late(__pa(swiotlb->list),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(int)));
+		memblock_free_late(swiotlb->start,
+				   PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
 	}
 	swiotlb_cleanup();
 }
@@ -465,7 +479,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
+	struct swiotlb *swiotlb = &default_swiotlb;
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -516,13 +531,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * Find suitable number of IO TLB entries size that will fit this
 	 * request and allocate a buffer from that IO TLB pool.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 
-	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+	if (unlikely(nslots > swiotlb->nslabs - swiotlb->used))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
+	index = ALIGN(swiotlb->index, stride);
+	if (index >= swiotlb->nslabs)
 		index = 0;
 	wrap = index;
 
@@ -530,7 +545,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs)
+			if (index >= swiotlb->nslabs)
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -541,40 +556,40 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
+		if (swiotlb->list[index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+				swiotlb->list[i] = 0;
+			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
+				swiotlb->list[i] = ++count;
+			tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
-					? (index + nslots) : 0);
+			swiotlb->index = ((index + nslots) < swiotlb->nslabs
+					  ? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs)
+		if (index >= swiotlb->nslabs)
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used;
+	tmp_io_tlb_used = swiotlb->used;
 
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
+			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
 	return (phys_addr_t)DMA_MAPPING_ERROR;
 
 found:
-	io_tlb_used += nslots;
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	swiotlb->used += nslots;
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
 	/*
 	 * Save away the mapping from the original address to the DMA address.
@@ -582,7 +597,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE ||
 	    dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -597,10 +612,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, size_t alloc_size,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -616,36 +632,37 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 * While returning the entries to the free list, we merge the entries
 	 * with slots below and above the pool being returned.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
+			 swiotlb->list[index + nslots] : 0);
 
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[i] = ++count;
-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+			swiotlb->list[i] = ++count;
+			swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 		}
 
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && swiotlb->list[i]; i--)
+			swiotlb->list[i] = ++count;
 
-		io_tlb_used -= nslots;
+		swiotlb->used -= nslots;
 	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	struct swiotlb *swiotlb = &default_swiotlb;
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -713,31 +730,31 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 bool is_swiotlb_active(void)
 {
 	/*
-	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
-	 * address zero, io_tlb_end surely doesn't.
+	 * When SWIOTLB is initialized, even if swiotlb->start points to
+	 * physical address zero, swiotlb->end surely doesn't.
	 */
-	return io_tlb_end != 0;
+	return default_swiotlb.end != 0;
 }
 
 bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
+	return paddr >= default_swiotlb.start && paddr < default_swiotlb.end;
 }
 
 phys_addr_t get_swiotlb_start(void)
 {
-	return io_tlb_start;
+	return default_swiotlb.start;
 }
 
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
 {
-	struct dentry *root;
+	struct swiotlb *swiotlb = &default_swiotlb;
 
-	root = debugfs_create_dir("swiotlb", NULL);
-	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
-	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
+	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
+	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
 	return 0;
 }
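The free-list initialization carried along above, list[i] = IO_TLB_SEGSIZE
- OFFSET(i, IO_TLB_SEGSIZE), encodes the allocator's core invariant: each
entry holds the number of contiguous free slots starting at that index,
and the count never crosses an IO_TLB_SEGSIZE boundary. A standalone model
of just that invariant (plain userspace C for illustration, not kernel
code; IO_TLB_SEGSIZE is 128 in mainline):

	#include <stdio.h>

	#define SEGSIZE 128	/* stands in for IO_TLB_SEGSIZE */
	#define OFFSET(val, align) ((val) & ((align) - 1))

	static unsigned int list[256];

	int main(void)
	{
		for (unsigned int i = 0; i < 256; i++)
			list[i] = SEGSIZE - OFFSET(i, SEGSIZE);

		printf("%u\n", list[0]);	/* 128: start of a segment */
		printf("%u\n", list[127]);	/* 1: last slot of the segment */
		printf("%u\n", list[128]);	/* 128: next segment restarts */
		return 0;
	}
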
From patchwork Tue Feb 9 06:21:21 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 379529
From: Claire Chang
To/Cc: (same recipients as [PATCH v4 01/14])
Subject: [PATCH v4 04/14] swiotlb: Refactor swiotlb_late_init_with_tbl
Date: Tue, 9 Feb 2021 14:21:21 +0800
Message-Id: <20210209062131.2300005-5-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailing-List: devicetree@vger.kernel.org

Refactor swiotlb_late_init_with_tbl to make the code reusable for
restricted DMA pool initialization.
Signed-off-by: Claire Chang

---
 kernel/dma/swiotlb.c | 65 ++++++++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 23 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 28b7bfe7a2a8..dc37951c6924 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -353,20 +353,21 @@ static void swiotlb_cleanup(void)
 	max_segment = 0;
 }
 
-int
-swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+static int swiotlb_init_tlb_pool(struct swiotlb *swiotlb, phys_addr_t start,
+				 size_t size)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-	unsigned long i, bytes;
+	unsigned long i;
+	void *vaddr = phys_to_virt(start);
 
-	bytes = nslabs << IO_TLB_SHIFT;
+	size = ALIGN(size, 1 << IO_TLB_SHIFT);
+	swiotlb->nslabs = size >> IO_TLB_SHIFT;
+	swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
-	swiotlb->nslabs = nslabs;
-	swiotlb->start = virt_to_phys(tlb);
-	swiotlb->end = swiotlb->start + bytes;
+	swiotlb->start = start;
+	swiotlb->end = swiotlb->start + size;
 
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
+	memset(vaddr, 0, size);
 
 	/*
 	 * Allocate and initialize the free list array. This array is used
@@ -390,13 +391,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	swiotlb->index = 0;
-	no_iotlb_memory = false;
-
-	swiotlb_print_info();
-	late_alloc = 1;
-
-	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
 	spin_lock_init(&swiotlb->lock);
 
 	return 0;
@@ -410,6 +405,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	return -ENOMEM;
 }
 
+int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+{
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
+	int ret;
+
+	ret = swiotlb_init_tlb_pool(swiotlb, virt_to_phys(tlb), bytes);
+	if (ret)
+		return ret;
+
+	no_iotlb_memory = false;
+
+	swiotlb_print_info();
+
+	late_alloc = 1;
+
+	swiotlb_set_max_segment(bytes);
+
+	return 0;
+}
+
 void __init swiotlb_exit(void)
 {
 	struct swiotlb *swiotlb = &default_swiotlb;
@@ -747,17 +763,20 @@ phys_addr_t get_swiotlb_start(void)
 }
 
 #ifdef CONFIG_DEBUG_FS
-
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct swiotlb *swiotlb, const char *name,
+				   struct dentry *node)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-
-	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	swiotlb->debugfs = debugfs_create_dir(name, node);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
-	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+static int __init swiotlb_create_default_debugfs(void)
+{
+	swiotlb_create_debugfs(&default_swiotlb, "swiotlb", NULL);
+
+	return 0;
+}
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif

From patchwork Tue Feb 9 06:21:23 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 379528
From: Claire Chang
To/Cc: (same recipients as [PATCH v4 01/14])
Subject: [PATCH v4 06/14] swiotlb: Add restricted DMA pool
Date: Tue, 9 Feb 2021 14:21:23 +0800
Message-Id: <20210209062131.2300005-7-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailing-List: devicetree@vger.kernel.org

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Signed-off-by: Claire Chang

---
 include/linux/device.h |  4 ++
 kernel/dma/swiotlb.c   | 94 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 97 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 7619a84f8ce4..08d440627b93 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -415,6 +415,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dev_swiotlb: Internal for swiotlb override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -517,6 +518,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct swiotlb *dev_swiotlb;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dc37951c6924..3a17451c5981 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include
+#include
+#include
+#include
+#include
+#endif
 
 #include
 #include
@@ -75,7 +82,8 @@ enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB blocks.
  * @list:	The free list describing the number of free entries available
  *		from each index.
@@ -780,3 +788,87 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct swiotlb *swiotlb = rmem->priv;
+	int ret;
+
+	if (dev->dev_swiotlb)
+		return -EBUSY;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * swiotlb struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!swiotlb) {
+		swiotlb = kzalloc(sizeof(*swiotlb), GFP_KERNEL);
+		if (!swiotlb)
+			return -ENOMEM;
+#ifdef CONFIG_ARM
+		unsigned long pfn = PHYS_PFN(rmem->base);
+
+		if (!PageHighMem(pfn_to_page(pfn))) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
+#endif /* CONFIG_ARM */
+
+		ret = swiotlb_init_tlb_pool(swiotlb, rmem->base, rmem->size);
+		if (ret)
+			goto cleanup;
+
+		rmem->priv = swiotlb;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	swiotlb_create_debugfs(swiotlb, rmem->name, default_swiotlb.debugfs);
+#endif /* CONFIG_DEBUG_FS */
+
+	dev->dev_swiotlb = swiotlb;
+
+	return 0;
+
+cleanup:
+	kfree(swiotlb);
+
+	return ret;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (!dev)
+		return;
+
+#ifdef CONFIG_DEBUG_FS
+	debugfs_remove_recursive(dev->dev_swiotlb->debugfs);
+#endif /* CONFIG_DEBUG_FS */
+	dev->dev_swiotlb = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %lu MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
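For context, a hedged sketch of the consumer side: a device whose node
carries a memory-region phandle to a reserved-memory node with
compatible = "restricted-dma-pool" (the string declared above) can attach
itself with the existing of_reserved_mem_device_init() API, which resolves
the region and invokes rmem_swiotlb_device_init() through rmem->ops. The
probe function name is hypothetical, and streaming DMA only switches over
to the private pool once the later patches in this series wire it up:

	#include <linux/of_reserved_mem.h>
	#include <linux/platform_device.h>

	static int example_probe(struct platform_device *pdev)
	{
		int ret;

		/* Fills pdev->dev.dev_swiotlb on success. */
		ret = of_reserved_mem_device_init(&pdev->dev);
		if (ret)
			return ret;	/* no usable restricted pool */

		/* From here on (with the rest of the series applied),
		 * this device bounces through its own pool instead of
		 * the default swiotlb. */
		return 0;
	}
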
From patchwork Tue Feb 9 06:21:25 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 379526
From: Claire Chang
To/Cc: (same recipients as [PATCH v4 01/14])
Subject: [PATCH v4 08/14] swiotlb: Use restricted DMA pool if available
Date: Tue, 9 Feb 2021 14:21:25 +0800
Message-Id: <20210209062131.2300005-9-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailing-List: devicetree@vger.kernel.org

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available. It provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs a way to lock down the memory access, e.g., an MPU.
Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 13 +++++++++++++ kernel/dma/direct.h | 2 +- kernel/dma/swiotlb.c | 20 +++++++++++++++++--- 3 files changed, 31 insertions(+), 4 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index f13a52a97382..76f86c684524 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -71,6 +71,15 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys, #ifdef CONFIG_SWIOTLB extern enum swiotlb_force swiotlb_force; +#ifdef CONFIG_DMA_RESTRICTED_POOL +bool is_swiotlb_force(struct device *dev); +#else +static inline bool is_swiotlb_force(struct device *dev) +{ + return unlikely(swiotlb_force == SWIOTLB_FORCE); +} +#endif /* CONFIG_DMA_RESTRICTED_POOL */ + bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr); void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); @@ -80,6 +89,10 @@ phys_addr_t get_swiotlb_start(struct device *dev); void __init swiotlb_adjust_size(unsigned long new_size); #else #define swiotlb_force SWIOTLB_NO_FORCE +static inline bool is_swiotlb_force(struct device *dev) +{ + return false; +} static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 7b83b1595989..b011db1b625d 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, phys_addr_t phys = page_to_phys(page) + offset; dma_addr_t dma_addr = phys_to_dma(dev, phys); - if (unlikely(swiotlb_force == SWIOTLB_FORCE)) + if (is_swiotlb_force(dev)) return swiotlb_map(dev, phys, size, dir, attrs); if (unlikely(!dma_capable(dev, dma_addr, size, true))) { diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index e22e7ae75f1c..6fdebde8fb1f 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -40,6 +40,7 @@ #include #endif #ifdef CONFIG_DMA_RESTRICTED_POOL +#include #include #include #include @@ -109,6 +110,10 @@ static struct swiotlb default_swiotlb; static inline struct swiotlb *get_swiotlb(struct device *dev) { +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (dev && dev->dev_swiotlb) + return dev->dev_swiotlb; +#endif return &default_swiotlb; } @@ -508,7 +513,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct swiotlb *swiotlb = &default_swiotlb; + struct swiotlb *swiotlb = get_swiotlb(hwdev); dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start); unsigned long flags; phys_addr_t tlb_addr; @@ -519,7 +524,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, unsigned long max_slots; unsigned long tmp_io_tlb_used; +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (no_iotlb_memory && !hwdev->dev_swiotlb) +#else if (no_iotlb_memory) +#endif panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer"); if (mem_encrypt_active()) @@ -641,7 +650,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct swiotlb *swiotlb = &default_swiotlb; + struct swiotlb *swiotlb = get_swiotlb(hwdev); unsigned long flags; int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT; @@ -689,7 +698,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, 
 			     phys_addr_t tlb_addr, size_t size,
 			     enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
@@ -801,6 +810,11 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+bool is_swiotlb_force(struct device *dev)
+{
+	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
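To make the new routing concrete, the selection logic of this patch boils
down to the following standalone model (plain C, not kernel code; the
types are simplified stand-ins for the kernel's struct device and struct
swiotlb, and the bool models swiotlb=force):

#include <stdbool.h>
#include <stdio.h>

struct swiotlb { const char *name; };

static struct swiotlb default_swiotlb = { "default swiotlb" };

struct device {
        struct swiotlb *dev_swiotlb;    /* set when a restricted pool is attached */
};

static bool swiotlb_force;              /* models the swiotlb=force command line */

/* A device with a restricted pool must always bounce through it. */
static bool is_swiotlb_force(struct device *dev)
{
        return swiotlb_force || dev->dev_swiotlb;
}

/* The per-device restricted pool is preferred over the global one. */
static struct swiotlb *get_swiotlb(struct device *dev)
{
        if (dev && dev->dev_swiotlb)
                return dev->dev_swiotlb;
        return &default_swiotlb;
}

int main(void)
{
        struct swiotlb restricted = { "restricted DMA pool" };
        struct device plain = { NULL }, locked = { &restricted };

        printf("plain device bounces: %d via %s\n",
               is_swiotlb_force(&plain), get_swiotlb(&plain)->name);
        printf("locked device bounces: %d via %s\n",
               is_swiotlb_force(&locked), get_swiotlb(&locked)->name);
        return 0;
}

Running the model shows the point of the patch: a device with a restricted
pool attached always bounces through that pool, regardless of the global
swiotlb setting, while other devices keep the old behavior.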
From patchwork Tue Feb 9 06:21:27 2021

From: Claire Chang
Subject: [PATCH v4 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Tue, 9 Feb 2021 14:21:27 +0800
Message-Id: <20210209062131.2300005-11-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>

Add a new wrapper __dma_direct_free_pages() that will be useful later
for dev_swiotlb_free().

Signed-off-by: Claire Chang
---
 kernel/dma/direct.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 30ccbc08e229..a76a1a2f24da 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,11 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +242,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +278,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +315,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +334,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
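At this point the wrapper is a behavior-preserving refactor; its value is
giving later patches a single point where frees can be diverted. A sketch
of the anticipated shape, not code from this patch: dev_swiotlb_free() is
only named in the commit message, so its signature and return convention
here are assumptions for illustration.

/* Sketch only: once per-device restricted pools can back allocations,
 * the single wrapper lets the free path hand pages back to the device's
 * pool instead of the CMA/page allocator. dev_swiotlb_free() is
 * hypothetical here; assume it returns true if it owned and freed the
 * pages. */
static void __dma_direct_free_pages(struct device *dev, struct page *page,
				    size_t size)
{
	if (dev_swiotlb_free(dev, page, size))
		return;
	dma_free_contiguous(dev, page, size);
}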
From patchwork Tue Feb 9 06:21:30 2021

From: Claire Chang
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , Claire Chang Subject: [PATCH v4 13/14] dt-bindings: of: Add restricted DMA pool Date: Tue, 9 Feb 2021 14:21:30 +0800 Message-Id: <20210209062131.2300005-14-tientzu@chromium.org> X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org> References: <20210209062131.2300005-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Introduce the new compatible string, restricted-dma-pool, for restricted DMA. One can specify the address and length of the restricted DMA memory region by restricted-dma-pool in the reserved-memory node. Signed-off-by: Claire Chang --- .../reserved-memory/reserved-memory.txt | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt index e8d3096d922c..fc9a12c2f679 100644 --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt @@ -51,6 +51,20 @@ compatible (optional) - standard definition used as a shared pool of DMA buffers for a set of devices. It can be used by an operating system to instantiate the necessary pool management subsystem if necessary. + - restricted-dma-pool: This indicates a region of memory meant to be + used as a pool of restricted DMA buffers for a set of devices. The + memory region would be the only region accessible to those devices. + When using this, the no-map and reusable properties must not be set, + so the operating system can create a virtual mapping that will be used + for synchronization. The main purpose for restricted DMA is to + mitigate the lack of DMA access control on systems without an IOMMU, + which could result in the DMA accessing the system memory at + unexpected times and/or unexpected addresses, possibly leading to data + leakage or corruption. The feature on its own provides a basic level + of protection against the DMA overwriting buffer contents at + unexpected times. However, to protect against general data leakage and + system memory corruption, the system needs to provide way to lock down + the memory access, e.g., MPU. - vendor specific string in the form ,[-] no-map (optional) - empty property - Indicates the operating system must not create a virtual mapping @@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB). compatible = "acme,multimedia-memory"; reg = <0x77000000 0x4000000>; }; + + restricted_dma_mem_reserved: restricted_dma_mem_reserved { + compatible = "restricted-dma-pool"; + reg = <0x50000000 0x400000>; + }; }; /* ... */ @@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB). memory-region = <&multimedia_reserved>; /* ... */ }; + + pcie_device: pcie_device@0,0 { + memory-region = <&restricted_dma_mem_reserved>; + /* ... */ + }; };