From patchwork Tue Feb 4 14:22:24 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ian Campbell <ian.campbell@citrix.com>
X-Patchwork-Id: 24105
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Feb 2014 14:22:24 +0000
Message-ID: <1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
    stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
    julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build.

Guests are initially started with caches disabled, so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time
since this will miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has
occurred gets committed to real RAM. To achieve this, add a new
cacheflush_page function, which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which
we do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
---
v2: Switch to cleaning at page allocation time + explicit flushing of the
    regions which the toolstack touches.
    Add XSM for new domctl.
    New domctl restricts the amount of space it is willing to flush, to
    avoid thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    3 +++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xenctrl.h               |    3 ++-
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |    9 +++++++++
 xen/arch/arm/p2m.c                  |   24 ++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |    3 +++
 xen/xsm/flask/policy/access_vectors |    2 ++
 15 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3b3f2fb 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But lets be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, gnttab_gmfn + 1);
+
     return 0;
 }
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..306b414 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,9 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid,
+                         phys->first, phys->first + phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..092d610 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t end_pfn)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.end_pfn = end_pfn;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch, uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..556810f 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, dst_pfn+1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, dst_pfn+1);
     return 0;
 }
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..80c397e 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,7 +453,8 @@ int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
-
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t end_pfn);
 
 /* Functions to produce a dump of a given domain
  * xc_domain_dumpcore - produces a dump to a specified file
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..8916e49 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = domctl->u.cacheflush.end_pfn;
+
+        if ( e < s )
+            return -EINVAL;
+
+        if ( get_order_from_pages(e-s) > MAX_ORDER )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2f48347..0c1a7b9 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -338,6 +338,15 @@ unsigned long domain_page_map_to_mfn(const void *va)
 }
 #endif
 
+void cacheflush_page(unsigned long mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..d452814 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,14 @@ static int apply_p2m_changes(struct domain *d,
                 count++;
             }
             break;
+
+        case CACHEFLUSH:
+            {
+                if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                    break;
+                cacheflush_page(pte.p2m.base);
+            }
+            break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +634,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..16bb739 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        cacheflush_page(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..342dde8 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
                            : : "r" (_p), "m" (*_p));                    \
     } while (0)
 
+/* Flush the dcache for an entire page. */
+void cacheflush_page(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..271517f 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void cacheflush_page(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t perms_strictly_increased(uint32_t old_flags,
                                               uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d8d8727 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush: [start, end). */
+    xen_pfn_t start_pfn, end_pfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..1345d7e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
# XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
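
For illustration only (not part of the patch): a minimal sketch of how a
toolstack component might invoke the new libxc wrapper after writing into
guest memory. xc_domain_cacheflush() is the wrapper added in
tools/libxc/xc_domain.c above; the helper name flush_one_page() and the
dirty_pfn parameter are assumptions made up for this example.

/* Illustrative sketch, assuming the caller has just written to dirty_pfn
 * through a foreign mapping and wants the guest to see the data once it
 * enables its caches. */
#include <stdio.h>
#include <xenctrl.h>

int flush_one_page(uint32_t domid, xen_pfn_t dirty_pfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int rc;

    if ( !xch )
        return -1;

    /* Clean and invalidate the page range [dirty_pfn, dirty_pfn + 1) via
     * the new XEN_DOMCTL_cacheflush domctl. */
    rc = xc_domain_cacheflush(xch, domid, dirty_pfn, dirty_pfn + 1);
    if ( rc < 0 )
        fprintf(stderr, "cacheflush of pfn 0x%lx failed\n",
                (unsigned long)dirty_pfn);

    xc_interface_close(xch);
    return rc;
}

The [start_pfn, end_pfn) range mirrors the domctl's semantics, so a single
page is expressed as dirty_pfn to dirty_pfn + 1.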