From patchwork Tue Apr 7 12:51:20 2015
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 46826
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Steven Capper, Russell King, Jiri Slaby
Subject: [PATCH 3.12 111/155] ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE
Date: Tue, 7 Apr 2015 14:51:20 +0200
X-Mailer: git-send-email 2.3.4
In-Reply-To: <9a548862b8a26cbccc14f2c6c9c3688813d8d14b.1428411003.git.jslaby@suse.cz>
References: <9a548862b8a26cbccc14f2c6c9c3688813d8d14b.1428411003.git.jslaby@suse.cz>

From: Steven Capper

3.12-stable review patch. If anyone has any objections, please let me know.

===============

commit ded9477984690d026e46dd75e8157392cea3f13f upstream.

For LPAE, we have the following means for encoding writable or dirty ptes:

                                 L_PTE_DIRTY   L_PTE_RDONLY
    !pte_dirty && !pte_write          0              1
    !pte_dirty &&  pte_write          0              1
     pte_dirty && !pte_write          1              1
     pte_dirty &&  pte_write          1              0

So we can't distinguish between writeable clean ptes and read only ptes.
This can cause problems with ptes being incorrectly flagged as read only
when they are writeable but not dirty.

This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and
adds additional logic to set AP[2] whenever the pte is read only or not
dirty. That way we can distinguish between clean writeable ptes and read
only ptes.

HugeTLB pages will use this new logic automatically.

We need to add some logic to Transparent HugePages to ensure that they
correctly interpret the revised pgprot permissions (L_PTE_RDONLY has
moved and no longer matches PMD_SECT_AP2). In the process of revising
THP, the names of the PMD software bits have been prefixed with L_ to
make them easier to distinguish from their hardware bit counterparts.
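To make the new rule concrete: the hardware AP[2] (read-only) bit becomes a pure
function of the two software bits, set unless the pte is both writable and dirty,
so a clean writable pte still faults on its first write and can be marked dirty.
A minimal, illustrative C sketch of that rule (the helper name hw_readonly and the
standalone test program are assumptions for demonstration, not part of this patch):

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative only: AP[2] (hardware read-only) is derived from the
 * software L_PTE_DIRTY and L_PTE_RDONLY bits when the pte is written
 * out, mirroring the rule the patch adds to set_pmd_at() and
 * cpu_v7_set_pte_ext().
 */
static bool hw_readonly(bool sw_dirty, bool sw_rdonly)
{
	/* AP[2] is set unless the pte is both writable and dirty */
	return sw_rdonly || !sw_dirty;
}

int main(void)
{
	printf("clean + writable -> AP[2]=%d (first write faults, marks dirty)\n",
	       hw_readonly(false, false));
	printf("dirty + writable -> AP[2]=%d (hardware-writable)\n",
	       hw_readonly(true, false));
	printf("read only        -> AP[2]=%d\n",
	       hw_readonly(false, true));
	return 0;
}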
Signed-off-by: Steve Capper
Reviewed-by: Will Deacon
Signed-off-by: Russell King
Signed-off-by: Jiri Slaby
[dump.c is not in 3.12]
---
 arch/arm/include/asm/pgtable-3level-hwdef.h |  3 ++-
 arch/arm/include/asm/pgtable-3level.h       | 41 +++++++++++++++++------------
 arch/arm/mm/proc-v7-3level.S                |  9 +++++--
 3 files changed, 33 insertions(+), 20 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h
index 626989fec4d3..9fd61c72a33a 100644
--- a/arch/arm/include/asm/pgtable-3level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
@@ -43,7 +43,7 @@
 #define PMD_SECT_BUFFERABLE	(_AT(pmdval_t, 1) << 2)
 #define PMD_SECT_CACHEABLE	(_AT(pmdval_t, 1) << 3)
 #define PMD_SECT_USER		(_AT(pmdval_t, 1) << 6)		/* AP[1] */
-#define PMD_SECT_RDONLY		(_AT(pmdval_t, 1) << 7)		/* AP[2] */
+#define PMD_SECT_AP2		(_AT(pmdval_t, 1) << 7)		/* read only */
 #define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
 #define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
 #define PMD_SECT_nG		(_AT(pmdval_t, 1) << 11)
@@ -72,6 +72,7 @@
 #define PTE_TABLE_BIT		(_AT(pteval_t, 1) << 1)
 #define PTE_BUFFERABLE		(_AT(pteval_t, 1) << 2)		/* AttrIndx[0] */
 #define PTE_CACHEABLE		(_AT(pteval_t, 1) << 3)		/* AttrIndx[1] */
+#define PTE_AP2			(_AT(pteval_t, 1) << 7)		/* AP[2] */
 #define PTE_EXT_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
 #define PTE_EXT_AF		(_AT(pteval_t, 1) << 10)	/* Access Flag */
 #define PTE_EXT_NG		(_AT(pteval_t, 1) << 11)	/* nG */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 738a2e346e1c..6a171d0afc12 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -79,18 +79,19 @@
 #define L_PTE_PRESENT		(_AT(pteval_t, 3) << 0)		/* Present */
 #define L_PTE_FILE		(_AT(pteval_t, 1) << 2)		/* only when !PRESENT */
 #define L_PTE_USER		(_AT(pteval_t, 1) << 6)		/* AP[1] */
-#define L_PTE_RDONLY		(_AT(pteval_t, 1) << 7)		/* AP[2] */
 #define L_PTE_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
 #define L_PTE_YOUNG		(_AT(pteval_t, 1) << 10)	/* AF */
 #define L_PTE_XN		(_AT(pteval_t, 1) << 54)	/* XN */
-#define L_PTE_DIRTY		(_AT(pteval_t, 1) << 55)	/* unused */
-#define L_PTE_SPECIAL		(_AT(pteval_t, 1) << 56)	/* unused */
+#define L_PTE_DIRTY		(_AT(pteval_t, 1) << 55)
+#define L_PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
 #define L_PTE_NONE		(_AT(pteval_t, 1) << 57)	/* PROT_NONE */
+#define L_PTE_RDONLY		(_AT(pteval_t, 1) << 58)	/* READ ONLY */
 
-#define PMD_SECT_VALID		(_AT(pmdval_t, 1) << 0)
-#define PMD_SECT_DIRTY		(_AT(pmdval_t, 1) << 55)
-#define PMD_SECT_SPLITTING	(_AT(pmdval_t, 1) << 56)
-#define PMD_SECT_NONE		(_AT(pmdval_t, 1) << 57)
+#define L_PMD_SECT_VALID	(_AT(pmdval_t, 1) << 0)
+#define L_PMD_SECT_DIRTY	(_AT(pmdval_t, 1) << 55)
+#define L_PMD_SECT_SPLITTING	(_AT(pmdval_t, 1) << 56)
+#define L_PMD_SECT_NONE		(_AT(pmdval_t, 1) << 57)
+#define L_PMD_SECT_RDONLY	(_AT(pteval_t, 1) << 58)
 
 /*
  * To be used in assembly code with the upper page attributes.
@@ -211,21 +212,22 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 #define pmd_young(pmd)		(pmd_isset((pmd), PMD_SECT_AF))
 
 #define __HAVE_ARCH_PMD_WRITE
-#define pmd_write(pmd)		(pmd_isclear((pmd), PMD_SECT_RDONLY))
+#define pmd_write(pmd)		(pmd_isclear((pmd), L_PMD_SECT_RDONLY))
+#define pmd_dirty(pmd)		(pmd_isset((pmd), L_PMD_SECT_DIRTY))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define pmd_trans_huge(pmd)	(pmd_val(pmd) && !pmd_table(pmd))
-#define pmd_trans_splitting(pmd) (pmd_isset((pmd), PMD_SECT_SPLITTING))
+#define pmd_trans_splitting(pmd) (pmd_isset((pmd), L_PMD_SECT_SPLITTING))
 #endif
 
 #define PMD_BIT_FUNC(fn,op) \
 static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
 
-PMD_BIT_FUNC(wrprotect,	|= PMD_SECT_RDONLY);
+PMD_BIT_FUNC(wrprotect,	|= L_PMD_SECT_RDONLY);
 PMD_BIT_FUNC(mkold,	&= ~PMD_SECT_AF);
-PMD_BIT_FUNC(mksplitting, |= PMD_SECT_SPLITTING);
-PMD_BIT_FUNC(mkwrite,   &= ~PMD_SECT_RDONLY);
-PMD_BIT_FUNC(mkdirty,   |= PMD_SECT_DIRTY);
+PMD_BIT_FUNC(mksplitting, |= L_PMD_SECT_SPLITTING);
+PMD_BIT_FUNC(mkwrite,   &= ~L_PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkdirty,   |= L_PMD_SECT_DIRTY);
 PMD_BIT_FUNC(mkyoung,   |= PMD_SECT_AF);
 
 #define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
@@ -239,8 +241,8 @@ PMD_BIT_FUNC(mkyoung,	|= PMD_SECT_AF);
 
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
-	const pmdval_t mask = PMD_SECT_USER | PMD_SECT_XN | PMD_SECT_RDONLY |
-				PMD_SECT_VALID | PMD_SECT_NONE;
+	const pmdval_t mask = PMD_SECT_USER | PMD_SECT_XN | L_PMD_SECT_RDONLY |
+				L_PMD_SECT_VALID | L_PMD_SECT_NONE;
 	pmd_val(pmd) = (pmd_val(pmd) & ~mask) | (pgprot_val(newprot) & mask);
 	return pmd;
 }
@@ -251,8 +253,13 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 	BUG_ON(addr >= TASK_SIZE);
 
 	/* create a faulting entry if PROT_NONE protected */
-	if (pmd_val(pmd) & PMD_SECT_NONE)
-		pmd_val(pmd) &= ~PMD_SECT_VALID;
+	if (pmd_val(pmd) & L_PMD_SECT_NONE)
+		pmd_val(pmd) &= ~L_PMD_SECT_VALID;
+
+	if (pmd_write(pmd) && pmd_dirty(pmd))
+		pmd_val(pmd) &= ~PMD_SECT_AP2;
+	else
+		pmd_val(pmd) |= PMD_SECT_AP2;
 
 	*pmdp = __pmd(pmd_val(pmd) | PMD_SECT_nG);
 	flush_pmd_entry(pmdp);
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 22e3ad63500c..eb81123a845d 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -86,8 +86,13 @@ ENTRY(cpu_v7_set_pte_ext)
 	tst	rh, #1 << (57 - 32)		@ L_PTE_NONE
 	bicne	rl, #L_PTE_VALID
 	bne	1f
-	tst	rh, #1 << (55 - 32)		@ L_PTE_DIRTY
-	orreq	rl, #L_PTE_RDONLY
+
+	eor	ip, rh, #1 << (55 - 32)	@ toggle L_PTE_DIRTY in temp reg to
+					@ test for !L_PTE_DIRTY || L_PTE_RDONLY
+	tst	ip, #1 << (55 - 32) | 1 << (58 - 32)
+	orrne	rl, #PTE_AP2
+	biceq	rl, #PTE_AP2
+
 1:	strd	r2, r3, [r0]
 	ALT_SMP(W(nop))
 	ALT_UP (mcr	p15, 0, r0, c7, c10, 1)		@ flush_pte
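The assembler change in cpu_v7_set_pte_ext computes the same rule with a bit
trick: the eor toggles L_PTE_DIRTY in a scratch register so that one tst against
both bit positions tests "!L_PTE_DIRTY || L_PTE_RDONLY", and AP[2] is then set
(orrne) or cleared (biceq) accordingly. A rough C equivalent of just that trick
(illustrative only; the function name and the standalone program are assumptions,
only the bit positions 55 and 58, minus 32 for the upper pte word, come from the
patch):

#include <stdint.h>
#include <stdio.h>

#define DIRTY_BIT	(UINT32_C(1) << (55 - 32))	/* L_PTE_DIRTY, upper word */
#define RDONLY_BIT	(UINT32_C(1) << (58 - 32))	/* L_PTE_RDONLY, upper word */

/* Returns nonzero if the hardware AP[2] (read-only) bit should be set. */
static uint32_t ap2_from_upper_word(uint32_t rh)
{
	/* eor: flip DIRTY so the bit is set when the pte is *not* dirty ... */
	uint32_t ip = rh ^ DIRTY_BIT;
	/* ... tst: either "not dirty" or "read only" forces hardware read-only */
	return ip & (DIRTY_BIT | RDONLY_BIT);
}

int main(void)
{
	printf("dirty + writable: %s\n",
	       ap2_from_upper_word(DIRTY_BIT) ? "AP2 set" : "AP2 clear");
	printf("clean + writable: %s\n",
	       ap2_from_upper_word(0) ? "AP2 set" : "AP2 clear");
	printf("dirty + rdonly:   %s\n",
	       ap2_from_upper_word(DIRTY_BIT | RDONLY_BIT) ? "AP2 set" : "AP2 clear");
	return 0;
}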