From patchwork Fri Feb 27 09:11:38 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 45213
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: marc.zyngier@arm.com, christoffer.dall@linaro.org, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: will.deacon@arm.com, arnd@arndb.de, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 2/2] ARM, arm64: kvm: get rid of the bounce page
Date: Fri, 27 Feb 2015 09:11:38 +0000
Message-Id: <1425028298-17289-3-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1425028298-17289-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1425028298-17289-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2

The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.
For arm64, we just align to 4 KB, and enforce that the code size is
less than 4 KB, regardless of the chosen page size.

For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power of 2 upper bound of the code size.

Note that this also fixes a benign off-by-one error in the original
bounce page code, where a bounce page would be allocated unnecessarily
if the code was exactly 1 page in size.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/kernel/vmlinux.lds.S   | 26 ++++++++++++++++++++++---
 arch/arm/kvm/init.S             |  3 +++
 arch/arm/kvm/mmu.c              | 42 +++++------------------------------------
 arch/arm64/kernel/vmlinux.lds.S | 18 ++++++++++++------
 4 files changed, 43 insertions(+), 46 deletions(-)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 2787eb8d3616..85db1669bfe3 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -26,12 +26,28 @@
 
 #define IDMAP_RODATA						\
 	.rodata : {						\
-		. = ALIGN(32);					\
+		. = ALIGN(HYP_IDMAP_ALIGN);			\
 		VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
 		*(.hyp.idmap.text)				\
 		VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
 	}
 
+/*
+ * If the HYP idmap .text section is populated, it needs to be positioned
+ * such that it will not cross a page boundary in the final output image.
+ * So align it to the section size rounded up to the next power of 2.
+ * If __hyp_idmap_size is undefined, the section will be empty so define
+ * it as 0 in that case.
+ */
+PROVIDE(__hyp_idmap_size = 0);
+
+#define HYP_IDMAP_ALIGN						\
+	__hyp_idmap_size == 0 ? 0 :				\
+	__hyp_idmap_size <= 0x100 ? 0x100 :			\
+	__hyp_idmap_size <= 0x200 ? 0x200 :			\
+	__hyp_idmap_size <= 0x400 ? 0x400 :			\
+	__hyp_idmap_size <= 0x800 ? 0x800 : 0x1000
+
 #ifdef CONFIG_HOTPLUG_CPU
 #define ARM_CPU_DISCARD(x)
 #define ARM_CPU_KEEP(x)		x
@@ -351,8 +367,12 @@ SECTIONS
  */
 ASSERT((__proc_info_end - __proc_info_begin), "missing CPU support")
 ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
+
 /*
- * The HYP init code can't be more than a page long.
+ * The HYP init code can't be more than a page long,
+ * and should not cross a page boundary.
  * The above comment applies as well.
  */
-ASSERT(((__hyp_idmap_text_end - __hyp_idmap_text_start) <= PAGE_SIZE), "HYP init code too big")
+ASSERT(((__hyp_idmap_text_end - 1) & PAGE_MASK) -
+	(__hyp_idmap_text_start & PAGE_MASK) == 0,
+	"HYP init code too big or unaligned")
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 3988e72d16ff..11fb1d56f449 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -157,3 +157,6 @@ target:	@ We're now in the trampoline code, switch page tables
 __kvm_hyp_init_end:
 
 	.popsection
+
+	.global	__hyp_idmap_size
+	.set	__hyp_idmap_size, __kvm_hyp_init_end - __kvm_hyp_init
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3e6859bc3e11..42a24d6b003b 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -37,7 +37,6 @@ static pgd_t *boot_hyp_pgd;
 static pgd_t *hyp_pgd;
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
 
-static void *init_bounce_page;
 static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
@@ -405,9 +404,6 @@ void free_boot_hyp_pgd(void)
 	if (hyp_pgd)
 		unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
 
-	free_page((unsigned long)init_bounce_page);
-	init_bounce_page = NULL;
-
 	mutex_unlock(&kvm_hyp_pgd_mutex);
 }
 
@@ -1498,39 +1494,11 @@ int kvm_mmu_init(void)
 	hyp_idmap_end = kvm_virt_to_phys(__hyp_idmap_text_end);
 	hyp_idmap_vector = kvm_virt_to_phys(__kvm_hyp_init);
 
-	if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
-		/*
-		 * Our init code is crossing a page boundary. Allocate
-		 * a bounce page, copy the code over and use that.
-		 */
-		size_t len = __hyp_idmap_text_end - __hyp_idmap_text_start;
-		phys_addr_t phys_base;
-
-		init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
-		if (!init_bounce_page) {
-			kvm_err("Couldn't allocate HYP init bounce page\n");
-			err = -ENOMEM;
-			goto out;
-		}
-
-		memcpy(init_bounce_page, __hyp_idmap_text_start, len);
-		/*
-		 * Warning: the code we just copied to the bounce page
-		 * must be flushed to the point of coherency.
-		 * Otherwise, the data may be sitting in L2, and HYP
-		 * mode won't be able to observe it as it runs with
-		 * caches off at that point.
-		 */
-		kvm_flush_dcache_to_poc(init_bounce_page, len);
-
-		phys_base = kvm_virt_to_phys(init_bounce_page);
-		hyp_idmap_vector += phys_base - hyp_idmap_start;
-		hyp_idmap_start = phys_base;
-		hyp_idmap_end = phys_base + len;
-
-		kvm_info("Using HYP init bounce page @%lx\n",
-			 (unsigned long)phys_base);
-	}
+	/*
+	 * We rely on the linker script to ensure at build time that the HYP
+	 * init code does not cross a page boundary.
+	 */
+	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
 
 	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
 	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 5d9d2dca530d..9e447f983fae 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -23,10 +23,14 @@ jiffies = jiffies_64;
 
 #define HYPERVISOR_TEXT					\
 	/*						\
-	 * Force the alignment to be compatible with	\
-	 * the vectors requirements			\
+	 * Align to 4 KB so that			\
+	 * a) the HYP vector table is at its minimum	\
+	 *    alignment of 2048 bytes			\
+	 * b) the HYP init code will not cross a page	\
+	 *    boundary if its size does not exceed	\
+	 *    4 KB (see related ASSERT() below)		\
 	 */						\
-	. = ALIGN(2048);				\
+	. = ALIGN(SZ_4K);				\
 	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
 	*(.hyp.idmap.text)				\
 	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
@@ -163,10 +167,12 @@ SECTIONS
 }
 
 /*
- * The HYP init code can't be more than a page long.
+ * The HYP init code can't be more than a page long,
+ * and should not cross a page boundary.
  */
-ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
-       "HYP init code too big")
+ASSERT(((__hyp_idmap_text_end - 1) & ~(SZ_4K - 1)) -
+	(__hyp_idmap_text_start & ~(SZ_4K - 1)) == 0,
+	"HYP init code too big or unaligned")
 
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
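
---

[Editor's note] The off-by-one mentioned in the commit message can be illustrated with a minimal standalone C sketch (not kernel code): PAGE_SIZE is hardcoded to 4 KB and the addresses are hypothetical, purely for demonstration. The old runtime test compared the page of the exclusive end address with the page of the start address, so code that ends exactly on a page boundary looked like a crossing and triggered an unnecessary bounce page; the replacement, like the new BUG_ON() and linker ASSERTs, compares the page of the last occupied byte (end - 1) instead.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* old test: crossing if start and (exclusive) end fall in different pages */
static int crosses_old(uint64_t start, uint64_t end)
{
	return ((start ^ end) & PAGE_MASK) != 0;
}

/* new test: compare the page of the last occupied byte instead */
static int crosses_new(uint64_t start, uint64_t end)
{
	return ((start ^ (end - 1)) & PAGE_MASK) != 0;
}

int main(void)
{
	/* hypothetical addresses: code occupies exactly one full page */
	uint64_t start = 0x40001000UL;
	uint64_t end   = start + PAGE_SIZE;

	printf("old check: %s\n", crosses_old(start, end) ?
	       "crossing (bounce page allocated unnecessarily)" : "no crossing");
	printf("new check: %s\n", crosses_new(start, end) ?
	       "crossing" : "no crossing");
	return 0;
}
```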