From patchwork Thu Apr 3 08:59:40 2014
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 3 Apr 2014 09:59:40 +0100
Message-ID: <1396515585-5737-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1396515560.4211.33.camel@kazak.uk.xensource.com>
References: <1396515560.4211.33.camel@kazak.uk.xensource.com>
Cc: julien.grall@linaro.org, tim@xen.org, Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 1/6] xen: arm: clarify naming of the Xen TLB flushing functions
All of the flush_xen_*_tlb functions operate on the local processor only. Add
_local to the name and update the comments to clarify.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v3: flush_xen_data_tlb_local_range_va => flush_xen_data_tlb_range_va_local
    fixed missing subst in setup_pagetables
---
 xen/arch/arm/mm.c                | 24 ++++++++++++------------
 xen/include/asm-arm/arm32/page.h | 21 +++++++++++++--------
 xen/include/asm-arm/arm64/page.h | 20 ++++++++++++--------
 3 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 3161d79..d523f77 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -215,7 +215,7 @@ void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
     pte.pt.table = 1; /* 4k mappings always have this bit set */
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_data_tlb_range_va_local(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -223,7 +223,7 @@ void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_data_tlb_range_va_local(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
@@ -301,7 +301,7 @@ void *map_domain_page(unsigned long mfn)
      * We may not have flushed this specific subpage at map time,
      * since we only flush the 4k page not the superpage
      */
-    flush_xen_data_tlb_range_va(va, PAGE_SIZE);
+    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
 
     return (void *)va;
 }
@@ -403,7 +403,7 @@ void __init remove_early_mappings(void)
 {
     lpae_t pte = {0};
     write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
-    flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, SECOND_SIZE);
+    flush_xen_data_tlb_range_va_local(BOOT_FDT_VIRT_START, SECOND_SIZE);
 }
 
 extern void relocate_xen(uint64_t ttbr, void *src, void *dst, size_t len);
@@ -421,7 +421,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     dest_va = BOOT_RELOC_VIRT_START;
     pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
     write_pte(xen_second + second_table_offset(dest_va), pte);
-    flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
+    flush_xen_data_tlb_range_va_local(dest_va, SECOND_SIZE);
 
     /* Calculate virt-to-phys offset for the new location */
     phys_offset = xen_paddr - (unsigned long) _start;
@@ -473,7 +473,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     dest_va = BOOT_RELOC_VIRT_START;
     pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
     write_pte(boot_second + second_table_offset(dest_va), pte);
-    flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
+    flush_xen_data_tlb_range_va_local(dest_va, SECOND_SIZE);
 #ifdef CONFIG_ARM_64
     ttbr = (uintptr_t) xen_pgtable + phys_offset;
 #else
@@ -521,7 +521,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* From now on, no mapping may be both writable and executable. */
     WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
     /* Flush everything after setting WXN bit. */
-    flush_xen_text_tlb();
+    flush_xen_text_tlb_local();
 
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
@@ -594,7 +594,7 @@ void __cpuinit mmu_init_secondary_cpu(void)
 {
     /* From now on, no mapping may be both writable and executable. */
     WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
-    flush_xen_text_tlb();
+    flush_xen_text_tlb_local();
 }
 
 /* Create Xen's mappings of memory.
@@ -622,7 +622,7 @@ static void __init create_32mb_mappings(lpae_t *second,
         write_pte(p + i, pte);
         pte.pt.base += 1 << LPAE_SHIFT;
     }
-    flush_xen_data_tlb();
+    flush_xen_data_tlb_local();
 }
 
 #ifdef CONFIG_ARM_32
@@ -701,7 +701,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         vaddr += FIRST_SIZE;
     }
 
-    flush_xen_data_tlb();
+    flush_xen_data_tlb_local();
 }
 #endif
@@ -845,7 +845,7 @@ static int create_xen_entries(enum xenmap_operation op,
                 BUG();
         }
     }
-    flush_xen_data_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
+    flush_xen_data_tlb_range_va_local(virt, PAGE_SIZE * nr_mfns);
 
     rc = 0;
@@ -908,7 +908,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
         }
         write_pte(xen_xenmap + i, pte);
     }
-    flush_xen_text_tlb();
+    flush_xen_text_tlb_local();
 }
 
 /* Release all __init and __initdata ranges to be reused */
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 191a108..b0a2025 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -27,13 +27,15 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
 
 /*
- * Flush all hypervisor mappings from the TLB and branch predictor.
+ * Flush all hypervisor mappings from the TLB and branch predictor of
+ * the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
  * before calling flush_xen_text_tlb.
  */
-static inline void flush_xen_text_tlb(void)
+static inline void flush_xen_text_tlb_local(void)
 {
     register unsigned long r0 asm ("r0");
     asm volatile (
@@ -47,10 +49,11 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
-static inline void flush_xen_data_tlb(void)
+static inline void flush_xen_data_tlb_local(void)
 {
     register unsigned long r0 asm ("r0");
     asm volatile("dsb;" /* Ensure preceding are visible */
@@ -61,10 +64,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
+                                                     unsigned long size)
 {
     unsigned long end = va + size;
     dsb(sy); /* Ensure preceding are visible */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 20b4c5a..65332a3 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -22,13 +22,14 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __clean_and_invalidate_xen_dcache_one(R) "dc civac, %" #R ";"
 
 /*
- * Flush all hypervisor mappings from the TLB
+ * Flush all hypervisor mappings from the TLB of the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
 * before calling flush_xen_text_tlb.
 */
-static inline void flush_xen_text_tlb(void)
+static inline void flush_xen_text_tlb_local(void)
 {
     asm volatile (
         "isb;"       /* Ensure synchronization with previous changes to text */
@@ -40,10 +41,11 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
 */
-static inline void flush_xen_data_tlb(void)
+static inline void flush_xen_data_tlb_local(void)
 {
     asm volatile (
         "dsb sy;"    /* Ensure visibility of PTE writes */
@@ -54,10 +56,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
 */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
+                                                     unsigned long size)
 {
     unsigned long end = va + size;
     dsb(sy); /* Ensure preceding are visible */
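
[Editor's note: a minimal, host-runnable sketch of the range-flush pattern that
flush_xen_data_tlb_range_va_local() implements: walk [va, va+size) one page at
a time and invalidate each page's TLB entry on the local processor. The
flush_one_page_local() helper and the counter are hypothetical stand-ins for the
real per-page MCR/TLBI instruction, which cannot run outside an ARM hypervisor;
the barrier placement is indicated only in comments.]

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Mock: counts per-page invalidations. In Xen this would be a TLB
 * invalidate-by-VA instruction on the local CPU, not a counter. */
static unsigned long pages_flushed;

static void flush_one_page_local(unsigned long va)
{
    (void)va;            /* real code: per-page TLB invalidate by VA */
    pages_flushed++;
}

/* Sketch of the loop behind flush_xen_data_tlb_range_va_local():
 * visit every page in [va, va + size). */
static void flush_data_tlb_range_va_local(unsigned long va, unsigned long size)
{
    unsigned long end = va + size;
    /* real code: dsb(sy) first, so preceding PTE writes are visible */
    while ( va < end )
    {
        flush_one_page_local(va);
        va += PAGE_SIZE;
    }
    /* real code: dsb(sy) and isb() to complete the flush */
}
```

As the renamed functions advertise, this touches only the local processor's
TLB; flushing other CPUs would require broadcast (inner-shareable) variants.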