From patchwork Fri Aug 27 17:21:07 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 503991
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, catalin.marinas@arm.com, will@kernel.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, konrad.wilk@oracle.com, boris.ostrovsky@oracle.com,
	jgross@suse.com, sstabellini@kernel.org, joro@8bytes.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, gregkh@linuxfoundation.org, arnd@arndb.de,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	brijesh.singh@amd.com, thomas.lendacky@amd.com, Tianyu.Lan@microsoft.com,
	pgonda@google.com, martin.b.radev@gmail.com, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
	aneesh.kumar@linux.ibm.com, krish.sadhukhan@oracle.com, saravanand@fb.com,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	rientjes@google.com, ardb@kernel.org, michael.h.kelley@microsoft.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
	parri.andrea@gmail.com, dave.hansen@intel.com
Subject: [PATCH V4 09/13] DMA: Add dma_map_decrypted/dma_unmap_encrypted() function
Date: Fri, 27 Aug 2021 13:21:07 -0400
Message-Id: <20210827172114.414281-10-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210827172114.414281-1-ltykernel@gmail.com>
References: <20210827172114.414281-1-ltykernel@gmail.com>

From: Tianyu Lan

In a Hyper-V Isolation VM with AMD SEV, the swiotlb bounce buffer needs to be
mapped into the address space above vTOM, so introduce
dma_map_decrypted()/dma_unmap_encrypted() to map and unmap bounce buffer
memory. The platform can populate the map/unmap callbacks in the dma memory
decrypted ops. The swiotlb bounce buffer PA is still returned to the driver
and used as the DMA address; the newly mapped virtual address is only used to
access the bounce buffer inside the swiotlb code. PAs passed to the DMA API
still have a backing struct page.
Signed-off-by: Tianyu Lan
---
 include/linux/dma-map-ops.h |  9 +++++++++
 kernel/dma/mapping.c        | 22 ++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d53a96a3d64..01d60a024e45 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -71,6 +71,11 @@ struct dma_map_ops {
 	unsigned long (*get_merge_boundary)(struct device *dev);
 };
 
+struct dma_memory_decrypted_ops {
+	void *(*map)(void *addr, unsigned long size);
+	void (*unmap)(void *addr);
+};
+
 #ifdef CONFIG_DMA_OPS
 #include <asm/dma-mapping.h>
 
@@ -374,6 +379,10 @@ static inline void debug_dma_dump_mappings(struct device *dev)
 }
 #endif /* CONFIG_DMA_API_DEBUG */
 
+void *dma_map_decrypted(void *addr, unsigned long size);
+int dma_unmap_encrypted(void *addr, unsigned long size);
+
 extern const struct dma_map_ops dma_dummy_ops;
+extern struct dma_memory_decrypted_ops dma_memory_generic_decrypted_ops;
 
 #endif /* _LINUX_DMA_MAP_OPS_H */
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 2b06a809d0b9..6fb150dc1750 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -13,11 +13,13 @@
 #include <linux/of_device.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/set_memory.h>
 #include "debug.h"
 #include "direct.h"
 
 bool dma_default_coherent;
+struct dma_memory_decrypted_ops dma_memory_generic_decrypted_ops;
 
 /*
  * Managed DMA API
  */
@@ -736,3 +738,23 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+void *dma_map_decrypted(void *addr, unsigned long size)
+{
+	if (set_memory_decrypted((unsigned long)addr,
+				 size / PAGE_SIZE))
+		return NULL;
+
+	if (dma_memory_generic_decrypted_ops.map)
+		return dma_memory_generic_decrypted_ops.map(addr, size);
+	else
+		return addr;
+}
+
+int dma_unmap_encrypted(void *addr, unsigned long size)
+{
+	if (dma_memory_generic_decrypted_ops.unmap)
+		dma_memory_generic_decrypted_ops.unmap(addr);
+
+	return set_memory_encrypted((unsigned long)addr, size / PAGE_SIZE);
+}
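
For reference, below is a minimal, illustrative sketch of how a platform such
as Hyper-V could populate dma_memory_generic_decrypted_ops. It is not part of
this patch: hv_vtom_boundary, hv_map_decrypted_pages(),
hv_unmap_decrypted_pages() and hv_init_decrypted_ops() are made-up names, and
the remap strategy (vmap_pfn() of the PFNs offset above vTOM) is only an
assumption about what a real ->map() callback might do.

/*
 * Illustrative sketch only -- not part of this patch. Assumes @addr is
 * page-aligned memory in the direct map that dma_map_decrypted() has
 * already marked decrypted.
 */
#include <linux/dma-map-ops.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical: the platform's virtual-top-of-memory (vTOM) boundary. */
static unsigned long hv_vtom_boundary;

/* ->map(): alias the decrypted pages at PFNs offset above vTOM. */
static void *hv_map_decrypted_pages(void *addr, unsigned long size)
{
	unsigned long nr = size / PAGE_SIZE;
	unsigned long *pfns;
	unsigned long i;
	void *vaddr;

	pfns = kcalloc(nr, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < nr; i++)
		pfns[i] = page_to_pfn(virt_to_page(addr + i * PAGE_SIZE)) +
			  (hv_vtom_boundary >> PAGE_SHIFT);

	vaddr = vmap_pfn(pfns, nr, PAGE_KERNEL);
	kfree(pfns);

	return vaddr;
}

/* ->unmap(): drop the extra alias created by ->map(). */
static void hv_unmap_decrypted_pages(void *addr)
{
	vunmap(addr);
}

/* Called from platform boot code (hypothetical hook). */
static void __init hv_init_decrypted_ops(void)
{
	dma_memory_generic_decrypted_ops.map = hv_map_decrypted_pages;
	dma_memory_generic_decrypted_ops.unmap = hv_unmap_decrypted_pages;
}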
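
On the consumer side, a sketch of how swiotlb-style pool setup might use the
two helpers; swiotlb_pool_remap() and swiotlb_pool_unmap() are hypothetical
names, the real swiotlb changes come later in this series.

#include <linux/dma-map-ops.h>
#include <linux/io.h>

/*
 * Hypothetical example: remap an already-allocated, page-aligned bounce
 * buffer pool. The device keeps using paddr as the DMA address; the
 * kernel touches the pool only through the pointer returned here.
 */
static void *swiotlb_pool_remap(phys_addr_t paddr, unsigned long bytes)
{
	return dma_map_decrypted(phys_to_virt(paddr), bytes);
}

static void swiotlb_pool_unmap(void *vaddr, unsigned long bytes)
{
	/* Mirror the map call when the pool is torn down. */
	dma_unmap_encrypted(vaddr, bytes);
}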