From patchwork Thu May  9 11:40:11 2013
X-Patchwork-Submitter: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
X-Patchwork-Id: 16805
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, linaro-kernel@lists.linaro.org,
	patches@linaro.org, marc.zyngier@arm.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, Anup
	Patel
Subject: [PATCH] arm64: KVM: Fix HCR_EL2 and VTCR_EL2 configuration bits
Date: Thu, 9 May 2013 17:10:11 +0530
Message-Id: <1368099611-4738-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5

This patch makes the following fixes:

1. Make the HCR_* flags unsigned long long constants.

   Reason: by default the compiler treats numeric constants as signed,
   and hence sign-extends them when they are assigned to an unsigned
   variable such as hcr_el2 (in the VCPU context). This accidentally
   sets HCR_ID and HCR_CD, making all guest memory non-cacheable. On
   real HW, this breaks Stage2 translation table walks and also breaks
   VirtIO.

2. Fix the VTCR_EL2_ORGN0_WBWA and VTCR_EL2_IRGN0_WBWA macros.
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel
---
 arch/arm64/include/asm/kvm_arm.h | 73 +++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 8ced0ca..0a951db 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -18,44 +18,45 @@
 #ifndef __ARM64_KVM_ARM_H__
 #define __ARM64_KVM_ARM_H__
 
+#include
 #include
 
 /* Hyp Configuration Register (HCR) bits */
-#define HCR_ID		(1 << 33)
-#define HCR_CD		(1 << 32)
+#define HCR_ID		(_AC(0x1, ULL) << 33)
+#define HCR_CD		(_AC(0x1, ULL) << 32)
 #define HCR_RW_SHIFT	31
-#define HCR_RW		(1 << HCR_RW_SHIFT)
-#define HCR_TRVM	(1 << 30)
-#define HCR_HCD		(1 << 29)
-#define HCR_TDZ		(1 << 28)
-#define HCR_TGE		(1 << 27)
-#define HCR_TVM		(1 << 26)
-#define HCR_TTLB	(1 << 25)
-#define HCR_TPU		(1 << 24)
-#define HCR_TPC		(1 << 23)
-#define HCR_TSW		(1 << 22)
-#define HCR_TAC		(1 << 21)
-#define HCR_TIDCP	(1 << 20)
-#define HCR_TSC		(1 << 19)
-#define HCR_TID3	(1 << 18)
-#define HCR_TID2	(1 << 17)
-#define HCR_TID1	(1 << 16)
-#define HCR_TID0	(1 << 15)
-#define HCR_TWE		(1 << 14)
-#define HCR_TWI		(1 << 13)
-#define HCR_DC		(1 << 12)
-#define HCR_BSU		(3 << 10)
-#define HCR_BSU_IS	(1 << 10)
-#define HCR_FB		(1 << 9)
-#define HCR_VA		(1 << 8)
-#define HCR_VI		(1 << 7)
-#define HCR_VF		(1 << 6)
-#define HCR_AMO		(1 << 5)
-#define HCR_IMO		(1 << 4)
-#define HCR_FMO		(1 << 3)
-#define HCR_PTW		(1 << 2)
-#define HCR_SWIO	(1 << 1)
-#define HCR_VM		(1)
+#define HCR_RW		(_AC(0x1, ULL) << HCR_RW_SHIFT)
+#define HCR_TRVM	(_AC(0x1, ULL) << 30)
+#define HCR_HCD		(_AC(0x1, ULL) << 29)
+#define HCR_TDZ		(_AC(0x1, ULL) << 28)
+#define HCR_TGE		(_AC(0x1, ULL) << 27)
+#define HCR_TVM		(_AC(0x1, ULL) << 26)
+#define HCR_TTLB	(_AC(0x1, ULL) << 25)
+#define HCR_TPU		(_AC(0x1, ULL) << 24)
+#define HCR_TPC		(_AC(0x1, ULL) << 23)
+#define HCR_TSW		(_AC(0x1, ULL) << 22)
+#define HCR_TAC		(_AC(0x1, ULL) << 21)
+#define HCR_TIDCP	(_AC(0x1, ULL) << 20)
+#define HCR_TSC		(_AC(0x1, ULL) << 19)
+#define HCR_TID3	(_AC(0x1, ULL) << 18)
+#define HCR_TID2	(_AC(0x1, ULL) << 17)
+#define HCR_TID1	(_AC(0x1, ULL) << 16)
+#define HCR_TID0	(_AC(0x1, ULL) << 15)
+#define HCR_TWE		(_AC(0x1, ULL) << 14)
+#define HCR_TWI		(_AC(0x1, ULL) << 13)
+#define HCR_DC		(_AC(0x1, ULL) << 12)
+#define HCR_BSU		(_AC(0x3, ULL) << 10)
+#define HCR_BSU_IS	(_AC(0x1, ULL) << 10)
+#define HCR_FB		(_AC(0x1, ULL) << 9)
+#define HCR_VA		(_AC(0x1, ULL) << 8)
+#define HCR_VI		(_AC(0x1, ULL) << 7)
+#define HCR_VF		(_AC(0x1, ULL) << 6)
+#define HCR_AMO		(_AC(0x1, ULL) << 5)
+#define HCR_IMO		(_AC(0x1, ULL) << 4)
+#define HCR_FMO		(_AC(0x1, ULL) << 3)
+#define HCR_PTW		(_AC(0x1, ULL) << 2)
+#define HCR_SWIO	(_AC(0x1, ULL) << 1)
+#define HCR_VM		(_AC(0x1, ULL))
 
 /*
  * The bits we set in HCR:
@@ -111,9 +112,9 @@
 #define VTCR_EL2_SH0_MASK	(3 << 12)
 #define VTCR_EL2_SH0_INNER	(3 << 12)
 #define VTCR_EL2_ORGN0_MASK	(3 << 10)
-#define VTCR_EL2_ORGN0_WBWA	(3 << 10)
+#define VTCR_EL2_ORGN0_WBWA	(1 << 10)
 #define VTCR_EL2_IRGN0_MASK	(3 << 8)
-#define VTCR_EL2_IRGN0_WBWA	(3 << 8)
+#define VTCR_EL2_IRGN0_WBWA	(1 << 8)
 #define VTCR_EL2_SL0_MASK	(3 << 6)
 #define VTCR_EL2_SL0_LVL1	(1 << 6)
 #define VTCR_EL2_T0SZ_MASK	0x3f