From patchwork Tue Mar 12 09:52:51 2019
X-Patchwork-Submitter: Suzuki K Poulose
X-Patchwork-Id: 160081
From: Suzuki K Poulose <suzuki.poulose@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    kvmarm@lists.cs.columbia.edu, james.morse@arm.com, julien.thierry@arm.com,
    Suzuki K Poulose, Zenghui Yu, Christoffer Dall, Marc Zyngier
Subject: [PATCH] kvm: arm: Enforce PTE mappings at stage2 when needed
Date: Tue, 12 Mar 2019 09:52:51 +0000
Message-Id: <1552384371-10772-1-git-send-email-suzuki.poulose@arm.com>
X-Mailer: git-send-email 2.7.4

commit 6794ad5443a2118 ("KVM: arm/arm64: Fix unintended stage 2 PMD
mappings") made the checks for skipping huge mappings stricter. However,
it introduced a bug where we could still use a huge mapping, ignoring the
flag that requests PTE mappings, because vma_pagesize was not reset to
PAGE_SIZE. Also, the checks do not cover PUD huge pages, which were under
review during the same period. This patch fixes both issues.

Fixes: 6794ad5443a2118 ("KVM: arm/arm64: Fix unintended stage 2 PMD mappings")
Reported-by: Zenghui Yu
Cc: Zenghui Yu
Cc: Christoffer Dall
Cc: Marc Zyngier
Signed-off-by: Suzuki K Poulose
---
 virt/kvm/arm/mmu.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

-- 
2.7.4
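[Note, not part of the patch: a self-contained userspace sketch of the
alignment rule that the generalized fault_supports_stage2_huge_mapping()
below enforces. block_mapping_is_safe() and its parameters are made-up
names used only for illustration; the two tests mirror the checks in the
hunks that follow.]

#include <stdbool.h>
#include <stdint.h>

/*
 * A stage-2 block mapping of 'map_size' bytes is only safe when the guest
 * IPA range and the userspace VA range of the memslot start at the same
 * offset within a block, and the block containing 'hva' lies entirely
 * inside the memslot's userspace range [uaddr_start, uaddr_end).
 */
bool block_mapping_is_safe(uint64_t gpa_start, uint64_t uaddr_start,
                           uint64_t uaddr_end, uint64_t hva,
                           uint64_t map_size)
{
        /* Different offsets within a block would map the wrong pages. */
        if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1)))
                return false;

        /* The whole block around 'hva' must be covered by the memslot. */
        return (hva & ~(map_size - 1)) >= uaddr_start &&
               (hva & ~(map_size - 1)) + map_size <= uaddr_end;
}

For example, a memslot whose IPA starts 0x1000 into a 2MB block while its
userspace address starts 0x3000 into one fails the first test, so only PTE
mappings are allowed for it.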
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 30251e2..66e0fbb5 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1595,8 +1595,9 @@ static void kvm_send_hwpoison_signal(unsigned long address,
         send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
 }
 
-static bool fault_supports_stage2_pmd_mappings(struct kvm_memory_slot *memslot,
-                                               unsigned long hva)
+static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
+                                               unsigned long hva,
+                                               unsigned long map_size)
 {
         gpa_t gpa_start, gpa_end;
         hva_t uaddr_start, uaddr_end;
@@ -1612,34 +1613,34 @@ static bool fault_supports_stage2_pmd_mappings(struct kvm_memory_slot *memslot,
 
         /*
          * Pages belonging to memslots that don't have the same alignment
-         * within a PMD for userspace and IPA cannot be mapped with stage-2
-         * PMD entries, because we'll end up mapping the wrong pages.
+         * within a PMD/PUD for userspace and IPA cannot be mapped with stage-2
+         * PMD/PUD entries, because we'll end up mapping the wrong pages.
          *
          * Consider a layout like the following:
          *
          *    memslot->userspace_addr:
          *    +-----+--------------------+--------------------+---+
-         *    |abcde|fgh  Stage-1 PMD    |    Stage-1 PMD   tv|xyz|
+         *    |abcde|fgh  Stage-1 block  |    Stage-1 block tv|xyz|
          *    +-----+--------------------+--------------------+---+
          *
          *    memslot->base_gfn << PAGE_SIZE:
          *      +---+--------------------+--------------------+-----+
-         *      |abc|def  Stage-2 PMD    |    Stage-2 PMD     |tvxyz|
+         *      |abc|def  Stage-2 block  |    Stage-2 block   |tvxyz|
          *      +---+--------------------+--------------------+-----+
          *
-         * If we create those stage-2 PMDs, we'll end up with this incorrect
+         * If we create those stage-2 blocks, we'll end up with this incorrect
          * mapping:
          *   d -> f
          *   e -> g
          *   f -> h
          */
-        if ((gpa_start & ~S2_PMD_MASK) != (uaddr_start & ~S2_PMD_MASK))
+        if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1)))
                 return false;
 
         /*
          * Next, let's make sure we're not trying to map anything not covered
-         * by the memslot. This means we have to prohibit PMD size mappings
-         * for the beginning and end of a non-PMD aligned and non-PMD sized
+         * by the memslot. This means we have to prohibit block size mappings
+         * for the beginning and end of a non-block aligned and non-block sized
          * memory slot (illustrated by the head and tail parts of the
          * userspace view above containing pages 'abcde' and 'xyz',
          * respectively).
@@ -1648,8 +1649,8 @@ static bool fault_supports_stage2_pmd_mappings(struct kvm_memory_slot *memslot,
          * userspace_addr or the base_gfn, as both are equally aligned (per
          * the check above) and equally sized.
          */
-        return (hva & S2_PMD_MASK) >= uaddr_start &&
-               (hva & S2_PMD_MASK) + S2_PMD_SIZE <= uaddr_end;
+        return (hva & ~(map_size - 1)) >= uaddr_start &&
+               (hva & ~(map_size - 1)) + map_size <= uaddr_end;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1678,12 +1679,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                 return -EFAULT;
         }
 
-        if (!fault_supports_stage2_pmd_mappings(memslot, hva))
-                force_pte = true;
-
-        if (logging_active)
-                force_pte = true;
-
         /* Let's check if we will get back a huge page backed by hugetlbfs */
         down_read(&current->mm->mmap_sem);
         vma = find_vma_intersection(current->mm, hva, hva + 1);
@@ -1694,6 +1689,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         }
 
         vma_pagesize = vma_kernel_pagesize(vma);
+        if (logging_active ||
+            !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
+                force_pte = true;
+                vma_pagesize = PAGE_SIZE;
+        }
+
         /*
          * The stage2 has a minimum of 2 level table (For arm64 see
          * kvm_arm_setup_stage2()). Hence, we are guaranteed that we can
@@ -1701,11 +1702,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
          * As for PUD huge maps, we must make sure that we have at least
          * 3 levels, i.e, PMD is not folded.
          */
-        if ((vma_pagesize == PMD_SIZE ||
-             (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm))) &&
-            !force_pte) {
+        if (vma_pagesize == PMD_SIZE ||
+            (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
                 gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
-        }
         up_read(&current->mm->mmap_sem);
 
         /* We need minimum second+third level pages */
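[Also not part of the patch: a minimal userspace sketch of the page-size
decision the new hunk in user_mem_abort() makes. stage2_fault_pagesize()
and EXAMPLE_PAGE_SIZE are hypothetical stand-ins; in the kernel the inputs
come from vma_kernel_pagesize() and fault_supports_stage2_huge_mapping(),
with logging_active computed earlier in user_mem_abort().]

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_PAGE_SIZE 4096ULL       /* stand-in for the kernel's PAGE_SIZE */

/*
 * Start from the size the VMA is backed with and fall back to a single
 * page when dirty logging is active or the memslot layout cannot take a
 * block of that size. Resetting the size here, together with setting
 * force_pte, is the step the earlier fix missed.
 */
uint64_t stage2_fault_pagesize(uint64_t vma_pagesize, bool logging_active,
                               bool supports_huge_mapping, bool *force_pte)
{
        if (logging_active || !supports_huge_mapping) {
                *force_pte = true;
                return EXAMPLE_PAGE_SIZE;
        }
        *force_pte = false;
        return vma_pagesize;
}

Keeping the fallback next to where vma_pagesize is computed means every
later comparison of vma_pagesize against PMD_SIZE/PUD_SIZE automatically
respects force_pte, which is why the !force_pte clause could be dropped
from the gfn adjustment above.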