From patchwork Thu Sep 25 19:42:53 2014
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 37933
From: Christoffer Dall <christoffer.dall@linaro.org>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier, Catalin Marinas, Jungseok Lee, Christoffer Dall, kvm@vger.kernel.org
Subject: [PATCH 1/2] arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
Date: Thu, 25 Sep 2014 21:42:53 +0200
Message-Id: <1411674174-30672-2-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1411674174-30672-1-git-send-email-christoffer.dall@linaro.org>
References: <1411674174-30672-1-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.0.0

This patch adds the necessary support for all host kernel PGSIZE and
VA_SPACE configuration options for both EL2 and the Stage-2 page tables.

However, for 40-bit and 42-bit PARange systems, the architecture mandates
that VTCR_EL2.SL0 is at most 1, resulting in fewer levels of Stage-2 page
tables than levels of host kernel page tables.  At the same time, on systems
with a PARange > 42 bits, we limit the IPA range by always setting
VTCR_EL2.T0SZ to 24.

To handle Stage-2 translation using a different number of page table levels
than the host kernel, we allocate a dummy PGD with pointers to our actual
initial-level Stage-2 page table, so that we can reuse the kernel pgtable
manipulation primitives.  Reproducing all of this in KVM does not look pretty
and would unnecessarily complicate the 32-bit side.

Systems with a PARange < 40 bits are not yet supported.

[ I have reworked this patch from its original form submitted by Jungseok
  to take the architecture constraints into consideration.  There were too
  many changes from the original patch for me to preserve the authorship.
  Thanks to Catalin Marinas for his help in figuring out a good solution to
  this challenge.  I have also fixed various bugs and missing error-code
  handling from the original patch. - Christoffer ]

Cc: Marc Zyngier
Cc: Catalin Marinas
Signed-off-by: Jungseok Lee
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_mmu.h   |  23 +++++++
 arch/arm/kvm/arm.c               |   2 +-
 arch/arm/kvm/mmu.c               | 111 +++++++++++++++++++++++++-------
 arch/arm64/Kconfig               |   1 -
 arch/arm64/include/asm/kvm_mmu.h | 136 ++++++++++++++++++++++++++++++++++++---
 5 files changed, 239 insertions(+), 34 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h index 3f688b4..0ffd2a8 100644 --- a/arch/arm/include/asm/kvm_mmu.h +++ b/arch/arm/include/asm/kvm_mmu.h @@ -37,6 +37,11 @@ */ #define TRAMPOLINE_VA UL(CONFIG_VECTORS_BASE) +/* + * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
+ */ +#define KVM_MMU_CACHE_MIN_PAGES 2 + #ifndef __ASSEMBLY__ #include @@ -83,6 +88,11 @@ static inline void kvm_clean_pgd(pgd_t *pgd) clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t)); } +static inline void kvm_clean_pmd(pmd_t *pmd) +{ + clean_dcache_area(pmd, PTRS_PER_PMD * sizeof(pmd_t)); +} + static inline void kvm_clean_pmd_entry(pmd_t *pmd) { clean_pmd_entry(pmd); @@ -127,6 +137,19 @@ static inline bool kvm_page_empty(void *ptr) #define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) #define kvm_pud_table_empty(pudp) (0) +#define KVM_PREALLOC_LEVELS 0 + +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) +{ + return 0; +} + +static inline void kvm_free_hwpgd(struct kvm *kvm) { } + +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm) +{ + return virt_to_phys(kvm->arch.pgd); +} struct kvm; diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c index 7796051..048f37f 100644 --- a/arch/arm/kvm/arm.c +++ b/arch/arm/kvm/arm.c @@ -409,7 +409,7 @@ static void update_vttbr(struct kvm *kvm) kvm_next_vmid++; /* update vttbr to be used with the new vmid */ - pgd_phys = virt_to_phys(kvm->arch.pgd); + pgd_phys = kvm_get_hwpgd(kvm); BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK); vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK; kvm->arch.vttbr = pgd_phys | vmid; diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c index bb06f76..4532f5f 100644 --- a/arch/arm/kvm/mmu.c +++ b/arch/arm/kvm/mmu.c @@ -33,6 +33,7 @@ extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[]; +static int kvm_prealloc_levels; static pgd_t *boot_hyp_pgd; static pgd_t *hyp_pgd; static DEFINE_MUTEX(kvm_hyp_pgd_mutex); @@ -42,7 +43,7 @@ static unsigned long hyp_idmap_start; static unsigned long hyp_idmap_end; static phys_addr_t hyp_idmap_vector; -#define pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t)) +#define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t)) #define kvm_pmd_huge(_x) (pmd_huge(_x) || pmd_trans_huge(_x)) @@ -158,7 +159,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud, } } while (pmd++, addr = next, addr != end); - if (kvm_pmd_table_empty(start_pmd)) + if (kvm_pmd_table_empty(start_pmd) && (!kvm || kvm_prealloc_levels < 2)) clear_pud_entry(kvm, pud, start_addr); } @@ -182,7 +183,7 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd, } } while (pud++, addr = next, addr != end); - if (kvm_pud_table_empty(start_pud)) + if (kvm_pud_table_empty(start_pud) && (!kvm || kvm_prealloc_levels < 1)) clear_pgd_entry(kvm, pgd, start_addr); } @@ -306,7 +307,7 @@ void free_boot_hyp_pgd(void) if (boot_hyp_pgd) { unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE); unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE); - free_pages((unsigned long)boot_hyp_pgd, pgd_order); + free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order); boot_hyp_pgd = NULL; } @@ -343,7 +344,7 @@ void free_hyp_pgds(void) for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE) unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE); - free_pages((unsigned long)hyp_pgd, pgd_order); + free_pages((unsigned long)hyp_pgd, hyp_pgd_order); hyp_pgd = NULL; } @@ -401,13 +402,46 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start, return 0; } +static int create_hyp_pud_mappings(pgd_t *pgd, unsigned long start, + unsigned long end, unsigned long pfn, + pgprot_t prot) +{ + pud_t *pud; + pmd_t *pmd; + unsigned long addr, next; + int ret; + + addr = start; + do { + pud = pud_offset(pgd, addr); + + if (pud_none_or_clear_bad(pud)) { + pmd = pmd_alloc_one(NULL, addr); + 
if (!pmd) { + kvm_err("Cannot allocate Hyp pmd\n"); + return -ENOMEM; + } + pud_populate(NULL, pud, pmd); + get_page(virt_to_page(pud)); + kvm_flush_dcache_to_poc(pud, sizeof(*pud)); + } + + next = pud_addr_end(addr, end); + ret = create_hyp_pmd_mappings(pud, addr, next, pfn, prot); + if (ret) + return ret; + pfn += (next - addr) >> PAGE_SHIFT; + } while (addr = next, addr != end); + + return 0; +} + static int __create_hyp_mappings(pgd_t *pgdp, unsigned long start, unsigned long end, unsigned long pfn, pgprot_t prot) { pgd_t *pgd; pud_t *pud; - pmd_t *pmd; unsigned long addr, next; int err = 0; @@ -416,22 +450,21 @@ static int __create_hyp_mappings(pgd_t *pgdp, end = PAGE_ALIGN(end); do { pgd = pgdp + pgd_index(addr); - pud = pud_offset(pgd, addr); - if (pud_none_or_clear_bad(pud)) { - pmd = pmd_alloc_one(NULL, addr); - if (!pmd) { - kvm_err("Cannot allocate Hyp pmd\n"); + if (pgd_none(*pgd)) { + pud = pud_alloc_one(NULL, addr); + if (!pud) { + kvm_err("Cannot allocate Hyp pud\n"); err = -ENOMEM; goto out; } - pud_populate(NULL, pud, pmd); - get_page(virt_to_page(pud)); - kvm_flush_dcache_to_poc(pud, sizeof(*pud)); + pgd_populate(NULL, pgd, pud); + get_page(virt_to_page(pgd)); + kvm_flush_dcache_to_poc(pgd, sizeof(*pgd)); } next = pgd_addr_end(addr, end); - err = create_hyp_pmd_mappings(pud, addr, next, pfn, prot); + err = create_hyp_pud_mappings(pgd, addr, next, pfn, prot); if (err) goto out; pfn += (next - addr) >> PAGE_SHIFT; @@ -521,6 +554,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr) */ int kvm_alloc_stage2_pgd(struct kvm *kvm) { + int ret; pgd_t *pgd; if (kvm->arch.pgd != NULL) { @@ -533,9 +567,17 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm) return -ENOMEM; memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t)); + + ret = kvm_prealloc_hwpgd(kvm, pgd); + if (ret) + goto out; + kvm_clean_pgd(pgd); kvm->arch.pgd = pgd; - + ret = 0; +out: + if (ret) + free_pages((unsigned long)pgd, S2_PGD_ORDER); return 0; } @@ -572,19 +614,36 @@ void kvm_free_stage2_pgd(struct kvm *kvm) return; unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE); + kvm_free_hwpgd(kvm); free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER); kvm->arch.pgd = NULL; } -static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, +static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, phys_addr_t addr) { pgd_t *pgd; pud_t *pud; - pmd_t *pmd; pgd = kvm->arch.pgd + pgd_index(addr); - pud = pud_offset(pgd, addr); + if (pgd_none(*pgd)) { + if (!cache) + return NULL; + pud = mmu_memory_cache_alloc(cache); + pgd_populate(NULL, pgd, pud); + get_page(virt_to_page(pgd)); + } + + return pud_offset(pgd, addr); +} + +static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, + phys_addr_t addr) +{ + pud_t *pud; + pmd_t *pmd; + + pud = stage2_get_pud(kvm, cache, addr); if (pud_none(*pud)) { if (!cache) return NULL; @@ -630,7 +689,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, pmd_t *pmd; pte_t *pte, old_pte; - /* Create stage-2 page table mapping - Level 1 */ + /* Create stage-2 page table mapping - Levels 0 and 1 */ pmd = stage2_get_pmd(kvm, cache, addr); if (!pmd) { /* @@ -688,7 +747,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) { pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE); - ret = mmu_topup_memory_cache(&cache, 2, 2); + ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES, + KVM_MMU_CACHE_MIN_PAGES); if (ret) goto out; 
spin_lock(&kvm->mmu_lock); @@ -797,7 +857,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, up_read(&current->mm->mmap_sem); /* We need minimum second+third level pages */ - ret = mmu_topup_memory_cache(memcache, 2, KVM_NR_MEM_OBJS); + ret = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES, + KVM_NR_MEM_OBJS); if (ret) return ret; @@ -1070,8 +1131,8 @@ int kvm_mmu_init(void) (unsigned long)phys_base); } - hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order); - boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order); + hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order); + boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order); if (!hyp_pgd || !boot_hyp_pgd) { kvm_err("Hyp mode PGD not allocated\n"); @@ -1113,6 +1174,8 @@ int kvm_mmu_init(void) goto out; } + kvm_prealloc_levels = KVM_PREALLOC_LEVELS; + return 0; out: free_hyp_pgds(); diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index fd4e81a..0f3e0a9 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -200,7 +200,6 @@ config ARM64_VA_BITS_42 config ARM64_VA_BITS_48 bool "48-bit" - depends on BROKEN endchoice diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index a030d16..313c6f9 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -41,6 +41,18 @@ */ #define TRAMPOLINE_VA (HYP_PAGE_OFFSET_MASK & PAGE_MASK) +/* + * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation + * levels in addition to the PGD and potentially the PUD which are + * pre-allocated (we pre-allocate the fake PGD and the PUD when the Stage-2 + * tables use one level of tables less than the kernel). + */ +#ifdef CONFIG_ARM64_64K_PAGES +#define KVM_MMU_CACHE_MIN_PAGES 1 +#else +#define KVM_MMU_CACHE_MIN_PAGES 2 +#endif + #ifdef __ASSEMBLY__ /* @@ -53,6 +65,7 @@ #else +#include #include #include @@ -65,10 +78,6 @@ #define KVM_PHYS_SIZE (1UL << KVM_PHYS_SHIFT) #define KVM_PHYS_MASK (KVM_PHYS_SIZE - 1UL) -/* Make sure we get the right size, and thus the right alignment */ -#define PTRS_PER_S2_PGD (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT)) -#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) - int create_hyp_mappings(void *from, void *to); int create_hyp_io_mappings(void *from, void *to, phys_addr_t); void free_boot_hyp_pgd(void); @@ -93,6 +102,7 @@ void kvm_clear_hyp_idmap(void); #define kvm_set_pmd(pmdp, pmd) set_pmd(pmdp, pmd) static inline void kvm_clean_pgd(pgd_t *pgd) {} +static inline void kvm_clean_pmd(pmd_t *pmd) {} static inline void kvm_clean_pmd_entry(pmd_t *pmd) {} static inline void kvm_clean_pte(pte_t *pte) {} static inline void kvm_clean_pte_entry(pte_t *pte) {} @@ -118,13 +128,123 @@ static inline bool kvm_page_empty(void *ptr) } #define kvm_pte_table_empty(ptep) kvm_page_empty(ptep) -#ifndef CONFIG_ARM64_64K_PAGES -#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) -#else +#if CONFIG_ARM64_PGTABLE_LEVELS == 2 #define kvm_pmd_table_empty(pmdp) (0) -#endif #define kvm_pud_table_empty(pudp) (0) +#elif CONFIG_ARM64_PGTABLE_LEVELS == 3 +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) +#define kvm_pud_table_empty(pudp) (0) +#elif CONFIG_ARM64_PGTABLE_LEVELS == 4 +#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) +#define kvm_pud_table_empty(pudp) kvm_page_empty(pudp) +#endif + +/** + * kvm_prealloc_hwpgd - allocate initial table for VTTBR + * @kvm: The KVM struct pointer for the VM.
+ * @pgd: The kernel pseudo pgd + * + * When the kernel uses more levels of page tables than the guest, we allocate + * a fake PGD and pre-populate it to point to the next-level page table, which + * will be the real initial page table pointed to by the VTTBR. + * + * When KVM_PREALLOC_PMD is defined, we allocate a single page for the PMD and + * the kernel will use folded pud. When KVM_PREALLOC_PUD is defined, we + * allocate 2 consecutive PUD pages. + */ +#if defined(CONFIG_ARM64_64K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 3 +#define KVM_PREALLOC_LEVELS 2 +#define PTRS_PER_S2_PGD 1 +#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) + + +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) +{ + pud_t *pud; + pmd_t *pmd; + + pud = pud_offset(pgd, 0); + pmd = (pmd_t *)__get_free_pages(GFP_KERNEL, 0); + + if (!pmd) + return -ENOMEM; + memset(pmd, 0, PAGE_SIZE); + pud_populate(NULL, pud, pmd); + get_page(virt_to_page(pud)); + + return 0; +} +static inline void kvm_free_hwpgd(struct kvm *kvm) +{ + pgd_t *pgd = kvm->arch.pgd; + pud_t *pud = pud_offset(pgd, 0); + pmd_t *pmd = pmd_offset(pud, 0); + free_pages((unsigned long)pmd, 0); + put_page(virt_to_page(pud)); +} + +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm) +{ + pgd_t *pgd = kvm->arch.pgd; + pud_t *pud = pud_offset(pgd, 0); + pmd_t *pmd = pmd_offset(pud, 0); + return virt_to_phys(pmd); + +} +#elif defined(CONFIG_ARM64_4K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 4 +#define KVM_PREALLOC_LEVELS 1 +#define PTRS_PER_S2_PGD 2 +#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) + +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) +{ + pud_t *pud; + + pud = (pud_t *)__get_free_pages(GFP_KERNEL, 1); + if (!pud) + return -ENOMEM; + memset(pud, 0, 2 * PAGE_SIZE); + pgd_populate(NULL, pgd, pud); + pgd_populate(NULL, pgd + 1, pud + PTRS_PER_PUD); + get_page(virt_to_page(pgd)); + get_page(virt_to_page(pgd)); + + return 0; +} + +static inline void kvm_free_hwpgd(struct kvm *kvm) +{ + pgd_t *pgd = kvm->arch.pgd; + pud_t *pud = pud_offset(pgd, 0); + free_pages((unsigned long)pud, 1); + put_page(virt_to_page(pgd)); + put_page(virt_to_page(pgd)); +} + +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm) +{ + pgd_t *pgd = kvm->arch.pgd; + pud_t *pud = pud_offset(pgd, 0); + return virt_to_phys(pud); +} +#else +#define KVM_PREALLOC_LEVELS 0 +#define PTRS_PER_S2_PGD (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT)) +#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) + +static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) +{ + return 0; +} + +static inline void kvm_free_hwpgd(struct kvm *kvm) { } + +static inline phys_addr_t kvm_get_hwpgd(struct kvm *kvm) +{ + return virt_to_phys(kvm->arch.pgd); +} +#endif struct kvm;