From patchwork Tue Dec 7 07:55:57 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 521846
From: Tianyu Lan
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    x86@kernel.org, hpa@zytor.com, davem@davemloft.net, kuba@kernel.org,
    jejb@linux.ibm.com, martin.petersen@oracle.com, arnd@arndb.de,
    hch@infradead.org, m.szyprowski@samsung.com, robin.murphy@arm.com,
    Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com,
    michael.h.kelley@microsoft.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
    vkuznets@redhat.com, brijesh.singh@amd.com, konrad.wilk@oracle.com,
    hch@lst.de, joro@8bytes.org, parri.andrea@gmail.com,
    dave.hansen@intel.com
Subject: [PATCH V6 1/5] swiotlb: Add swiotlb bounce buffer remap function for HV IVM
Date: Tue, 7 Dec 2021 02:55:57 -0500
Message-Id: <20211207075602.2452-2-ltykernel@gmail.com>
In-Reply-To: <20211207075602.2452-1-ltykernel@gmail.com>
References: <20211207075602.2452-1-ltykernel@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Tianyu Lan

In an Isolation VM with AMD SEV, the bounce buffer must be accessed via an
extra address space above shared_gpa_boundary (e.g. the 39th address bit),
which is reported by the Hyper-V CPUID leaf ISOLATION_CONFIG. The physical
address used for the access is the original physical address plus
shared_gpa_boundary. In the AMD SEV-SNP spec, shared_gpa_boundary is called
the virtual top of memory (vTOM): memory addresses below vTOM are
automatically treated as private, while memory above vTOM is treated as
shared.

Expose swiotlb_unencrypted_base so that a platform can set the unencrypted
memory base offset, and have the platform call
swiotlb_update_mem_attributes() to remap the swiotlb memory into the
unencrypted address space. memremap() cannot be called in the early boot
stage, so the remapping code is placed in swiotlb_update_mem_attributes().
Store the remapped address and use it when copying data to/from the swiotlb
bounce buffer.
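To illustrate how a platform is expected to consume the new hook, here is a
minimal sketch (not part of this patch): hv_swiotlb_setup_example() and
hv_shared_gpa_boundary are illustrative, assumed names standing in for
however the platform discovers its boundary; only swiotlb_unencrypted_base
and swiotlb_update_mem_attributes() come from this series.

/* Illustrative platform-side setup, assuming the boundary is already known. */
#include <linux/swiotlb.h>

static phys_addr_t hv_shared_gpa_boundary;	/* e.g. 1ULL << 39, from CPUID */

static void __init hv_swiotlb_setup_example(void)
{
	/* Point swiotlb at the unencrypted (above-vTOM) alias of its pool. */
	swiotlb_unencrypted_base = hv_shared_gpa_boundary;

	/*
	 * Once memremap() is usable, this remaps the pool so that
	 * mem->vaddr refers to the shared alias used for bouncing.
	 */
	swiotlb_update_mem_attributes();
}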
Acked-by: Christoph Hellwig
Signed-off-by: Tianyu Lan
---
Change since v3:
	* Fix boot-up failure on hosts with mem_encrypt=on. Move the calling
	  of set_memory_decrypted() back from swiotlb_init_io_tlb_mem() to
	  swiotlb_late_init_with_tbl() and rmem_swiotlb_device_init().

Change since v2:
	* Leave mem->vaddr as phys_to_virt(mem->start) when remapping the
	  swiotlb memory fails.

Change since v1:
	* Rework the comment in swiotlb_init_io_tlb_mem().
	* Make swiotlb_init_io_tlb_mem() return void again.
---
 include/linux/swiotlb.h |  6 ++++++
 kernel/dma/swiotlb.c    | 43 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 569272871375..f6c3638255d5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,6 +73,9 @@ extern enum swiotlb_force swiotlb_force;
 * @end:	The end address of the swiotlb memory pool. Used to do a quick
 *		range check to see if the memory was in fact allocated by this
 *		API.
+ * @vaddr:	The vaddr of the swiotlb memory pool. The swiotlb memory pool
+ *		may be remapped in the memory encrypted case and store virtual
+ *		address for bounce buffer operation.
 * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
 *		@end. For default swiotlb, this is command line adjustable via
 *		setup_io_tlb_npages.
@@ -92,6 +95,7 @@ extern enum swiotlb_force swiotlb_force;
 struct io_tlb_mem {
 	phys_addr_t start;
 	phys_addr_t end;
+	void *vaddr;
 	unsigned long nslabs;
 	unsigned long used;
 	unsigned int index;
@@ -186,4 +190,6 @@ static inline bool is_swiotlb_for_alloc(struct device *dev)
 }
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
 
+extern phys_addr_t swiotlb_unencrypted_base;
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8e840fbbed7c..34e6ade4f73c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -50,6 +50,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -72,6 +73,8 @@ enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem io_tlb_default_mem;
 
+phys_addr_t swiotlb_unencrypted_base;
+
 /*
  * Max segment that we can provide which (if pages are contingous) will
  * not be bounced (unless SWIOTLB_FORCE is set).
@@ -155,6 +158,27 @@ static inline unsigned long nr_slots(u64 val)
 	return DIV_ROUND_UP(val, IO_TLB_SIZE);
 }
 
+/*
+ * Remap swioltb memory in the unencrypted physical address space
+ * when swiotlb_unencrypted_base is set. (e.g. for Hyper-V AMD SEV-SNP
+ * Isolation VMs).
+ */
+void *swiotlb_mem_remap(struct io_tlb_mem *mem, unsigned long bytes)
+{
+	void *vaddr = NULL;
+
+	if (swiotlb_unencrypted_base) {
+		phys_addr_t paddr = mem->start + swiotlb_unencrypted_base;
+
+		vaddr = memremap(paddr, bytes, MEMREMAP_WB);
+		if (!vaddr)
+			pr_err("Failed to map the unencrypted memory %llx size %lx.\n",
+			       paddr, bytes);
+	}
+
+	return vaddr;
+}
+
 /*
  * Early SWIOTLB allocation may be too early to allow an architecture to
  * perform the desired operations. This function allows the architecture to
@@ -172,7 +196,12 @@ void __init swiotlb_update_mem_attributes(void)
 	vaddr = phys_to_virt(mem->start);
 	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
-	memset(vaddr, 0, bytes);
+
+	mem->vaddr = swiotlb_mem_remap(mem, bytes);
+	if (!mem->vaddr)
+		mem->vaddr = vaddr;
+
+	memset(mem->vaddr, 0, bytes);
 }
 
 static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
@@ -196,7 +225,17 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
 		mem->slots[i].alloc_size = 0;
 	}
+
+	/*
+	 * If swiotlb_unencrypted_base is set, the bounce buffer memory will
+	 * be remapped and cleared in swiotlb_update_mem_attributes.
+	 */
+	if (swiotlb_unencrypted_base)
+		return;
+
 	memset(vaddr, 0, bytes);
+	mem->vaddr = vaddr;
+	return;
 }
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
@@ -371,7 +410,7 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
 	unsigned long pfn = PFN_DOWN(orig_addr);
-	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	unsigned char *vaddr = mem->vaddr + tlb_addr - mem->start;
 	unsigned int tlb_offset, orig_addr_offset;
 
 	if (orig_addr == INVALID_PHYS_ADDR)

From patchwork Tue Dec 7 07:55:58 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 521844
From: Tianyu Lan
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    x86@kernel.org, hpa@zytor.com, davem@davemloft.net, kuba@kernel.org,
    jejb@linux.ibm.com, martin.petersen@oracle.com, arnd@arndb.de,
    hch@infradead.org, m.szyprowski@samsung.com, robin.murphy@arm.com,
    Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com,
    michael.h.kelley@microsoft.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
    vkuznets@redhat.com, brijesh.singh@amd.com, konrad.wilk@oracle.com,
    hch@lst.de, joro@8bytes.org, parri.andrea@gmail.com,
    dave.hansen@intel.com
Subject: [PATCH V6 2/5] x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has()
Date: Tue, 7 Dec 2021 02:55:58 -0500
Message-Id: <20211207075602.2452-3-ltykernel@gmail.com>
In-Reply-To: <20211207075602.2452-1-ltykernel@gmail.com>
References: <20211207075602.2452-1-ltykernel@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Tianyu Lan

Hyper-V provides Isolation VMs, which have memory encryption support. Add
hyperv_cc_platform_has() and return true when the CC_ATTR_GUEST_MEM_ENCRYPT
attribute is checked in such a VM.
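For context, this is how generic code is expected to pick up the new check; a
minimal sketch, where driver_needs_bounce_example() is an illustrative name
rather than anything in the kernel:

/*
 * With this patch, a Hyper-V Isolation VM answers "yes" only for
 * CC_ATTR_GUEST_MEM_ENCRYPT, so common code (e.g. the DMA layer) takes
 * the bounce-buffer path without any Hyper-V-specific knowledge.
 */
#include <linux/cc_platform.h>

static bool driver_needs_bounce_example(void)
{
	return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
}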
Signed-off-by: Tianyu Lan
---
Change since v3:
	* Change the code style of checking the GUEST_MEM attribute in
	  hyperv_cc_platform_has().
---
 arch/x86/kernel/cc_platform.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
index 03bb2f343ddb..47db88c275d5 100644
--- a/arch/x86/kernel/cc_platform.c
+++ b/arch/x86/kernel/cc_platform.c
@@ -11,6 +11,7 @@
 #include
 #include
 
+#include
 #include
 
 static bool __maybe_unused intel_cc_platform_has(enum cc_attr attr)
@@ -58,9 +59,16 @@ static bool amd_cc_platform_has(enum cc_attr attr)
 #endif
 }
 
+static bool hyperv_cc_platform_has(enum cc_attr attr)
+{
+	return attr == CC_ATTR_GUEST_MEM_ENCRYPT;
+}
 bool cc_platform_has(enum cc_attr attr)
 {
+	if (hv_is_isolation_supported())
+		return hyperv_cc_platform_has(attr);
+
 	if (sme_me_mask)
 		return amd_cc_platform_has(attr);

From patchwork Tue Dec 7 07:56:00 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 521845
From: Tianyu Lan
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    x86@kernel.org, hpa@zytor.com, davem@davemloft.net, kuba@kernel.org,
    jejb@linux.ibm.com, martin.petersen@oracle.com, arnd@arndb.de,
    hch@infradead.org, m.szyprowski@samsung.com, robin.murphy@arm.com,
    Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com,
    michael.h.kelley@microsoft.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
    vkuznets@redhat.com, brijesh.singh@amd.com, konrad.wilk@oracle.com,
    hch@lst.de, joro@8bytes.org, parri.andrea@gmail.com,
    dave.hansen@intel.com
Subject: [PATCH V6 4/5] scsi: storvsc: Add Isolation VM support for storvsc driver
Date: Tue, 7 Dec 2021 02:56:00 -0500
Message-Id: <20211207075602.2452-5-ltykernel@gmail.com>
In-Reply-To: <20211207075602.2452-1-ltykernel@gmail.com>
References: <20211207075602.2452-1-ltykernel@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Tianyu Lan

In an Isolation VM, all memory shared with the host must be marked visible
to the host via a hypercall. vmbus_establish_gpadl() already does this for
the storvsc rx/tx ring buffers, but the page buffers used by
vmbus_sendpacket_mpb_desc() still need to be handled. Use the DMA API
(scsi_dma_map/unmap) to map this memory while sending/receiving packets and
obtain the swiotlb bounce buffer DMA addresses: in an Isolation VM the
swiotlb bounce buffer is marked visible to the host and swiotlb force mode
is enabled.

Set the device's DMA min align mask to HV_HYP_PAGE_SIZE - 1 in order to
keep the original data offset within the bounce buffer.
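To make the data path easier to follow, here is a condensed sketch of what
the queuecommand path does after this patch; fill_hvpfn_array_example() is an
illustrative helper name, not code from the driver, and error handling is
trimmed:

/*
 * Map the command's scatterlist with the DMA API (which bounces through
 * swiotlb in an Isolation VM) and convert the resulting DMA addresses,
 * not the original page frames, into Hyper-V-size PFNs. Because the
 * device's min align mask is HV_HYP_PAGE_SIZE - 1, swiotlb preserves the
 * offset within a Hyper-V page, so the offset_in_hvpg bookkeeping done
 * from the original sgl still matches the bounced addresses.
 */
#include <linux/scatterlist.h>
#include <scsi/scsi_cmnd.h>
#include <asm/mshyperv.h>

static int fill_hvpfn_array_example(struct scsi_cmnd *scmnd, u64 *pfn_array)
{
	struct scatterlist *sg;
	int sg_count, j, i = 0;

	sg_count = scsi_dma_map(scmnd);
	if (sg_count < 0)
		return sg_count;

	for_each_sg(scsi_sglist(scmnd), sg, sg_count, j) {
		unsigned long hvpfn = HVPFN_DOWN(sg_dma_address(sg));
		unsigned long n = HVPFN_UP(sg_dma_address(sg) +
					   sg_dma_len(sg)) - hvpfn;

		while (n--)	/* one entry per Hyper-V-size page */
			pfn_array[i++] = hvpfn++;
	}

	return 0;	/* scsi_dma_unmap(scmnd) undoes the mapping on completion */
}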
Signed-off-by: Tianyu Lan
---
 drivers/hv/vmbus_drv.c     |  4 ++++
 drivers/scsi/storvsc_drv.c | 37 +++++++++++++++++++++----------------
 include/linux/hyperv.h     |  1 +
 3 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 392c1ac4f819..ae6ec503399a 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "hyperv_vmbus.h"
@@ -2078,6 +2079,7 @@ struct hv_device *vmbus_device_create(const guid_t *type,
 	return child_device_obj;
 }
 
+static u64 vmbus_dma_mask = DMA_BIT_MASK(64);
 /*
  * vmbus_device_register - Register the child device
  */
@@ -2118,6 +2120,8 @@ int vmbus_device_register(struct hv_device *child_device_obj)
 	}
 	hv_debug_add_dev_dir(child_device_obj);
 
+	child_device_obj->device.dma_mask = &vmbus_dma_mask;
+	child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
 	return 0;
 
 err_kset_unregister:
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 20595c0ba0ae..ae293600d799 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -21,6 +21,8 @@
 #include
 #include
 #include
+#include
+
 #include
 #include
 #include
@@ -1336,6 +1338,7 @@ static void storvsc_on_channel_callback(void *context)
 				continue;
 			}
 			request = (struct storvsc_cmd_request *)scsi_cmd_priv(scmnd);
+			scsi_dma_unmap(scmnd);
 		}
 
 		storvsc_on_receive(stor_device, packet, request);
@@ -1749,7 +1752,6 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	struct hv_host_device *host_dev = shost_priv(host);
 	struct hv_device *dev = host_dev->dev;
 	struct storvsc_cmd_request *cmd_request = scsi_cmd_priv(scmnd);
-	int i;
 	struct scatterlist *sgl;
 	unsigned int sg_count;
 	struct vmscsi_request *vm_srb;
@@ -1831,10 +1833,11 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	payload_sz = sizeof(cmd_request->mpb);
 
 	if (sg_count) {
-		unsigned int hvpgoff, hvpfns_to_add;
 		unsigned long offset_in_hvpg = offset_in_hvpage(sgl->offset);
 		unsigned int hvpg_count = HVPFN_UP(offset_in_hvpg + length);
-		u64 hvpfn;
+		struct scatterlist *sg;
+		unsigned long hvpfn, hvpfns_to_add;
+		int j, i = 0;
 
 		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
@@ -1848,21 +1851,22 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 		payload->range.len = length;
 		payload->range.offset = offset_in_hvpg;
 
+		sg_count = scsi_dma_map(scmnd);
+		if (sg_count < 0)
+			return SCSI_MLQUEUE_DEVICE_BUSY;
-		for (i = 0; sgl != NULL; sgl = sg_next(sgl)) {
+		for_each_sg(sgl, sg, sg_count, j) {
 			/*
-			 * Init values for the current sgl entry. hvpgoff
-			 * and hvpfns_to_add are in units of Hyper-V size
-			 * pages. Handling the PAGE_SIZE != HV_HYP_PAGE_SIZE
-			 * case also handles values of sgl->offset that are
-			 * larger than PAGE_SIZE. Such offsets are handled
-			 * even on other than the first sgl entry, provided
-			 * they are a multiple of PAGE_SIZE.
+			 * Init values for the current sgl entry. hvpfns_to_add
+			 * is in units of Hyper-V size pages. Handling the
+			 * PAGE_SIZE != HV_HYP_PAGE_SIZE case also handles
+			 * values of sgl->offset that are larger than PAGE_SIZE.
+			 * Such offsets are handled even on other than the first
+			 * sgl entry, provided they are a multiple of PAGE_SIZE.
			 */
-			hvpgoff = HVPFN_DOWN(sgl->offset);
-			hvpfn = page_to_hvpfn(sg_page(sgl)) + hvpgoff;
-			hvpfns_to_add = HVPFN_UP(sgl->offset + sgl->length) -
-						hvpgoff;
+			hvpfn = HVPFN_DOWN(sg_dma_address(sg));
+			hvpfns_to_add = HVPFN_UP(sg_dma_address(sg) +
+						 sg_dma_len(sg)) - hvpfn;
 
 			/*
 			 * Fill the next portion of the PFN array with
@@ -1872,7 +1876,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 			 * the PFN array is filled.
 			 */
 			while (hvpfns_to_add--)
-				payload->range.pfn_array[i++] = hvpfn++;
+				payload->range.pfn_array[i++] = hvpfn++;
 		}
 	}
 
@@ -2016,6 +2020,7 @@ static int storvsc_probe(struct hv_device *device,
 	stor_device->vmscsi_size_delta = sizeof(struct vmscsi_win8_extension);
 	spin_lock_init(&stor_device->lock);
 	hv_set_drvdata(device, stor_device);
+	dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);
 
 	stor_device->port_number = host->host_no;
 	ret = storvsc_connect_to_vsp(device, storvsc_ringbuffer_size, is_fc);
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 1f037e114dc8..74f5e92f91a0 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1261,6 +1261,7 @@ struct hv_device {
 	struct vmbus_channel *channel;
 	struct kset *channels_kset;
+	struct device_dma_parameters dma_parms;
 
 	/* place holder to keep track of the dir for hv device in debugfs */
 	struct dentry *debug_dir;