From patchwork Tue Jun 2 14:48:09 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 49389
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: gregkh@linuxfoundation.org, christoffer.dall@linaro.org, shannon.zhao@linaro.org, Joel Schopp
Subject: [PATCH for 3.14.y stable 14/32] arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
Date: Tue, 2 Jun 2015 22:48:09 +0800
Message-Id: <1433256507-7856-15-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1433256507-7856-1-git-send-email-shannon.zhao@linaro.org>
References: <1433256507-7856-1-git-send-email-shannon.zhao@linaro.org>

From: Joel Schopp

commit dbff124e29fa24aff9705b354b5f4648cd96e0bb upstream.

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.

[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
  instead of a hard-coded value and to move the alignment check of the
  allocation to mmu.c. Also added a comment explaining why we hardcode
  the IPA range and changed the stage-2 pgd allocation to be based on
  the 40 bit IPA range instead of the maximum possible 48 bit PA range.
  - Christoffer ]

Reviewed-by: Catalin Marinas
Signed-off-by: Joel Schopp
Signed-off-by: Christoffer Dall
Signed-off-by: Shannon Zhao
---
 arch/arm/kvm/arm.c               |  4 ++--
 arch/arm64/include/asm/kvm_arm.h | 13 ++++++++++++-
 arch/arm64/include/asm/kvm_mmu.h |  5 ++---
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index df6e75e..55c1ebf 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -427,9 +427,9 @@ static void update_vttbr(struct kvm *kvm)
 
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
-	kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
-	kvm->arch.vttbr |= vmid;
+	kvm->arch.vttbr = pgd_phys | vmid;
 
 	spin_unlock(&kvm_vmid_lock);
 }
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 00fbaa7..2bc2602 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -122,6 +122,17 @@
 #define VTCR_EL2_T0SZ_MASK	0x3f
 #define VTCR_EL2_T0SZ_40B	24
 
+/*
+ * We configure the Stage-2 page tables to always restrict the IPA space to be
+ * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
+ * not known to exist and will break with this configuration.
+ *
+ * Note that when using 4K pages, we concatenate two first level page tables
+ * together.
+ *
+ * The magic numbers used for VTTBR_X in this patch can be found in Tables
+ * D4-23 and D4-25 in ARM DDI 0487A.b.
+ */
 #ifdef CONFIG_ARM64_64K_PAGES
 /*
  * Stage2 translation configuration:
@@ -151,7 +162,7 @@
 #endif
 
 #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK  (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
 #define VTTBR_VMID_SHIFT  (48LLU)
 #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 737da74..a030d16 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -59,10 +59,9 @@
 #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
 
 /*
- * Align KVM with the kernel's view of physical memory. Should be
- * 40bit IPA, with PGD being 8kB aligned in the 4KB page configuration.
+ * We currently only support a 40bit IPA.
  */
-#define KVM_PHYS_SHIFT	PHYS_MASK_SHIFT
+#define KVM_PHYS_SHIFT	(40)
 #define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)
 #define KVM_PHYS_MASK	(KVM_PHYS_SIZE - 1UL)