From patchwork Fri Jul 18 15:08:40 2014
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 33872
From: Catalin Marinas <catalin.marinas@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon, Leif Lindholm
Subject: [PATCH] arm64: Fix barriers used for page table modifications
Date: Fri, 18 Jul 2014 16:08:40 +0100
Message-Id: <1405696120-22016-1-git-send-email-catalin.marinas@arm.com>

The architecture specification states that both DSB and ISB are required
between page table modifications and subsequent memory accesses using the
corresponding virtual address. When TLB invalidation takes place, the
tlb_flush_* functions already have the necessary barriers. However, there
are other functions like create_mapping() for which this is not the case.

The patch adds the DSB+ISB instructions in the set_pte() function for
valid kernel mappings. The invalid pte case is handled by tlb_flush_*,
and user mappings in general have a corresponding update_mmu_cache() call
containing a DSB. Even when update_mmu_cache() isn't called, the kernel
can still cope with an unlikely spurious page fault by re-executing the
instruction.

In addition, the set_pmd() and set_pud() functions gain an ISB for
architecture compliance when block mappings are created.
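[Editor's note: to make the failure mode concrete, here is a minimal
sketch of the map-then-access pattern the patch is concerned with.
ptep_of() is a hypothetical lookup helper standing in for the page table
walk done by create_mapping(); it is not actual kernel API.]

/*
 * Sketch only: ptep_of() is a hypothetical stand-in for the page table
 * walk; set_pte(), pfn_pte() and PAGE_KERNEL are the usual kernel API.
 */
static void map_then_access(unsigned long va, unsigned long pfn)
{
	pte_t *ptep = ptep_of(va);		/* hypothetical helper */

	set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));
	/*
	 * Without a DSB, the pte store may not yet be visible to the MMU's
	 * table walker; without an ISB, the load below may already have
	 * been fetched/speculated using the old (invalid) translation and
	 * can take a spurious translation fault. With DSB+ISB in set_pte()
	 * (this patch), the access below is architecturally ordered after
	 * the page table update.
	 */
	(void)*(volatile unsigned long *)va;
}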
Signed-off-by: Catalin Marinas
Reported-by: Leif Lindholm
Acked-by: Steve Capper
Cc: Will Deacon
Cc: Leif Lindholm
---
I'll do a similar patch for arch/arm but since there is no failure
currently, it's not urgent. For arm64 it fails with fixmap mappings on
some platforms during boot (Leif knows the details).

 arch/arm64/include/asm/cacheflush.h | 11 +----------
 arch/arm64/include/asm/pgtable.h    | 13 +++++++++++++
 arch/arm64/include/asm/tlbflush.h   |  5 +++--
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index a5176cf32dad..f2defe1c380c 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -138,19 +138,10 @@ static inline void __flush_icache_all(void)
 #define flush_icache_page(vma,page)	do { } while (0)
 
 /*
- * flush_cache_vmap() is used when creating mappings (eg, via vmap,
- * vmalloc, ioremap etc) in kernel space for pages. On non-VIPT
- * caches, since the direct-mappings of these pages may contain cached
- * data, we need to do a full cache flush to ensure that writebacks
- * don't corrupt data placed into these pages via the new mappings.
+ * Not required on AArch64 (PIPT or VIPT non-aliasing D-cache).
  */
 static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 {
-	/*
-	 * set_pte_at() called from vmap_pte_range() does not
-	 * have a DSB after cleaning the cache line.
-	 */
-	dsb(ish);
 }
 
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d7455fa83bc7..627b5080a4cc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -138,6 +138,8 @@ extern struct page *empty_zero_page;
 
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
+#define pte_valid_not_user(pte) \
+	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
@@ -184,6 +186,15 @@ static inline pte_t pte_mkspecial(pte_t pte)
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
+
+	/*
+	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
+	 * or update_mmu_cache() have the necessary barriers.
+	 */
+	if (pte_valid_not_user(pte)) {
+		dsb(ishst);
+		isb();
+	}
 }
 
 extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
@@ -303,6 +314,7 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	*pmdp = pmd;
 	dsb(ishst);
+	isb();
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
@@ -333,6 +345,7 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	*pudp = pud;
 	dsb(ishst);
+	isb();
 }
 
 static inline void pud_clear(pud_t *pudp)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b9349c4513ea..3796ea6bb734 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -122,6 +122,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
 		asm("tlbi vaae1is, %0" : : "r"(addr));
 	dsb(ish);
+	isb();
 }
 
 /*
@@ -131,8 +132,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 				    unsigned long addr, pte_t *ptep)
 {
 	/*
-	 * set_pte() does not have a DSB, so make sure that the page table
-	 * write is visible.
+	 * set_pte() does not have a DSB for user mappings, so make sure that
+	 * the page table write is visible.
 	 */
 	dsb(ishst);
 }
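[Editor's note: for readers unfamiliar with the barrier macros used in
the diff, the arm64 dsb()/isb() macros of this era expand roughly as
below. This is an approximation from asm/barrier.h around v3.16; treat
the exact definitions as an assumption and check the tree you work
against.]

/*
 * Approximate arm64 barrier macros (asm/barrier.h, circa v3.16),
 * shown for reference only.
 */
#define isb()		asm volatile("isb" : : : "memory")
#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")

/*
 * dsb(ishst): data synchronization barrier, inner-shareable domain,
 * stores only -- orders the pte write against observers such as the
 * table walker. dsb(ish) additionally orders loads. isb(): flushes the
 * CPU pipeline so subsequent instructions execute with the new
 * translation and completed TLB maintenance visible.
 */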