From patchwork Tue Jan 14 13:36:55 2014
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 23184
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, ian.campbell@citrix.com, tim@xen.org,
	stefano.stabellini@citrix.com, Julien Grall
Subject: [PATCH v3] xen/arm: p2m: Correctly flush TLB in create_p2m_entries
Date: Tue, 14 Jan 2014 13:36:55 +0000
Message-Id: <1389706615-9578-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4

The p2m is shared between the VCPUs of a domain. Currently Xen only
flushes the TLB on the local PCPU. This could result in a mismatch
between the mappings in the p2m and the TLBs.

Flush the TLB entries used by this domain on every PCPU. The flush can
also be moved out of the loop because:
    - ALLOCATE: only called for dom0 RAM allocation, so the flush is
      never reached
    - INSERT: if valid == 1, that means we have replaced a page that
      already belongs to the domain, and a VCPU could write to the
      wrong page. This can happen for dom0 with the 1:1 mapping,
      because the mapping is not removed from the p2m.
    - REMOVE: except for the grant table (replace_grant_host_mapping),
      each call to guest_physmap_remove_page is protected by the
      callers via a get_page -> ... -> guest_physmap_remove_page ->
      ... -> put_page. So the page can't be allocated to another
      domain until the last put_page.
    - RELINQUISH: the domain is not running anymore, so we don't
      care...

Also avoid leaking a foreign page if INSERT puts a new mapping on top
of a foreign mapping.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes in v3:
    - Add an ASSERT in ALLOCATE
    - Fix typo in commit message
    - Move the put_page above the switch to avoid leaking a foreign
      page when a page is replaced

Changes in v2:
    - Flush only the TLB entries of the given domain
    - Move the flush out of the loop

This is a possible bug fix (found by reading the code) for Xen 4.4. I
moved the flush out of the loop, which should be safe (see why in the
commit message). Without this patch, the guest can have stale TLB
entries when a VCPU is moved to another PCPU.

Except for the grant table (I can't find {get,put}_page in the
grant-table code???), all the callers are protected by a get_page
before removing the page. So if another VCPU tries to access this page
before the flush, it will just read/write the wrong page.

The downside of this patch is that Xen flushes fewer TLB entries.
Instead of flushing all TLB entries on the current PCPU, Xen flushes
the TLB entries for a specific VMID on every CPU. This should be safe
because create_p2m_entries only deals with a specific domain.

I don't think I have forgotten a case in this function. Let me know if
I have.
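The reasoning above for hoisting the flush out of the loop can be modelled in isolation: accumulate whether any iteration replaced a valid entry, and flush once after the loop instead of once per replaced entry. The sketch below is a standalone illustration of that pattern, not Xen code; `flush_count`, `flush_tlb_model` and `update_entries` are made-up names standing in for the real `flush` flag and `flush_tlb()`.

```c
#include <assert.h>

static int flush_count;            /* counts simulated TLB flushes */

static void flush_tlb_model(void)  /* stands in for flush_tlb() */
{
    flush_count++;
}

/* entries[i] != 0 models "this p2m entry was valid and is replaced".
 * Returns whether a flush was needed. */
static int update_entries(const int *entries, int n)
{
    int flush = 0;

    for ( int i = 0; i < n; i++ )
        flush |= entries[i];       /* accumulate pte.p2m.valid, as the patch does */

    if ( flush )
        flush_tlb_model();         /* one flush after the loop, not n flushes */

    return flush;
}
```

The observable behaviour (no stale entry survives past the function) is the same either way; only the number of flushes changes.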
---
 xen/arch/arm/p2m.c | 56 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 38 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 11f4714..85ca330 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
                               int mattr,
                               p2m_type_t t)
 {
-    int rc, flush;
+    int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *first = NULL, *second = NULL, *third = NULL;
     paddr_t addr;
@@ -246,10 +246,15 @@ static int create_p2m_entries(struct domain *d,
                   cur_first_offset = ~0, cur_second_offset = ~0;
     unsigned long count = 0;
+    unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
+    lpae_t pte;

     spin_lock(&p2m->lock);

+    if ( d != current->domain )
+        p2m_load_VTTBR(d);
+
     addr = start_gpaddr;
     while ( addr < end_gpaddr )
     {
@@ -316,15 +321,31 @@ static int create_p2m_entries(struct domain *d,
             cur_second_offset = second_table_offset(addr);
         }

-        flush = third[third_table_offset(addr)].p2m.valid;
+        pte = third[third_table_offset(addr)];
+
+        flush |= pte.p2m.valid;
+
+        /* TODO: Handle other p2m type
+         *
+         * It's safe to do the put_page here because page_alloc will
+         * flush the TLBs if the page is reallocated before the end of
+         * this loop.
+         */
+        if ( pte.p2m.valid && p2m_is_foreign(pte.p2m.type) )
+        {
+            unsigned long mfn = pte.p2m.base;
+
+            ASSERT(mfn_valid(mfn));
+            put_page(mfn_to_page(mfn));
+        }

         /* Allocate a new RAM page and attach */
         switch (op) {
             case ALLOCATE:
                 {
                     struct page_info *page;
-                    lpae_t pte;

+                    ASSERT(!pte.p2m.valid);
                     rc = -ENOMEM;
                     page = alloc_domheap_page(d, 0);
                     if ( page == NULL ) {
@@ -339,8 +360,7 @@ static int create_p2m_entries(struct domain *d,
                 break;
             case INSERT:
                 {
-                    lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT,
-                                                  mattr, t);
+                    pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
                     write_pte(&third[third_table_offset(addr)], pte);
                     maddr += PAGE_SIZE;
                 }
@@ -348,9 +368,6 @@
             case RELINQUISH:
             case REMOVE:
                 {
-                    lpae_t pte = third[third_table_offset(addr)];
-                    unsigned long mfn = pte.p2m.base;
-
                     if ( !pte.p2m.valid )
                     {
                         count++;
@@ -359,13 +376,6 @@
                     count += 0x10;

-                    /* TODO: Handle other p2m type */
-                    if ( p2m_is_foreign(pte.p2m.type) )
-                    {
-                        ASSERT(mfn_valid(mfn));
-                        put_page(mfn_to_page(mfn));
-                    }
-
                     memset(&pte, 0x00, sizeof(pte));
                     write_pte(&third[third_table_offset(addr)], pte);
                     count++;
@@ -373,9 +383,6 @@
                 break;
         }

-        if ( flush )
-            flush_tlb_all_local();
-
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
         if ( op == RELINQUISH && count >= 0x2000 )
         {
@@ -392,6 +399,16 @@
         addr += PAGE_SIZE;
     }

+    if ( flush )
+    {
+        /* At the beginning of the function, Xen is updating VTTBR
+         * with the domain where the mappings are created. In this
+         * case it's only necessary to flush TLBs on every CPUs with
+         * the current VMID (our domain).
+         */
+        flush_tlb();
+    }
+
     if ( op == ALLOCATE || op == INSERT )
     {
         unsigned long sgfn = paddr_to_pfn(start_gpaddr);
@@ -409,6 +426,9 @@
 out:
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);

+    if ( d != current->domain )
+        p2m_load_VTTBR(current->domain);
+
     spin_unlock(&p2m->lock);

     return rc;
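The safety argument for REMOVE relies on the get_page -> ... -> put_page discipline described in the commit message: a page whose reference count is still raised cannot be reallocated to another domain, so a stale TLB entry can at worst reach the old page. The following standalone model (not Xen's implementation; `page_model`, `get_page_model`, etc. are invented names) illustrates that invariant.

```c
#include <assert.h>

/* Minimal model of a refcounted page. In Xen the count lives in
 * struct page_info and is manipulated by get_page()/put_page(). */
struct page_model {
    int refcount;
};

static int get_page_model(struct page_model *pg)
{
    pg->refcount++;   /* caller now holds a reference across the remove */
    return 1;
}

static void put_page_model(struct page_model *pg)
{
    pg->refcount--;   /* last put drops the page back to the allocator */
}

/* The allocator may only hand the page to another domain once every
 * reference has been dropped. */
static int can_reallocate(const struct page_model *pg)
{
    return pg->refcount == 0;
}
```

In this model, a VCPU racing with the remove before the deferred flush can only touch a page that is still pinned, which mirrors why moving the flush after the loop is claimed to be safe.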