From patchwork Mon May 19 16:23:57 2014
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 30391
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Mon, 19 May 2014 17:23:57 +0100
Message-Id: <1400516640-7175-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1400516640-7175-1-git-send-email-julien.grall@linaro.org>
References: <1400516640-7175-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall, tim@xen.org, ian.campbell@citrix.com, Jan Beulich
Subject: [Xen-devel] [PATCH v8 1/4] xen/arm: p2m: Clean cache PT when the IOMMU doesn't support coherent walk
Some IOMMUs don't support coherent PT walk. When the p2m is shared with
the CPU, Xen has to make sure the PT changes have reached the memory.

Introduce a new IOMMU function that will check if the IOMMU feature is
enabled for a specified domain.

On ARM, the platform can contain multiple IOMMUs. Each of them may not
support the same set of features. The domain parameter will be used to
get the set of features for the IOMMUs used by this domain.

Signed-off-by: Julien Grall
Acked-by: Jan Beulich
Acked-by: Ian Campbell

---
    Changes in v8:
        - Drop final comma in the enum
        - Add function clear_and_clean_page

    Changes in v7:
        - Add IOMMU_FEAT_count
        - Use DECLARE_BITMAP

    Changes in v6:
        - Rework the condition to flush cache for PT
        - Use {set,clear,test}_bit
        - Store features in hvm_iommu structure and add accessor
        - Don't specify values in the enum

    Changes in v5:
        - Flush on every write_pte instead of on unmap page. This avoids
          flushing a whole page when only a few bytes are modified
        - Only get the IOMMU feature once
        - Add bits to flush the cache when a new table is created
        - Fix typos in commit message and comment
        - Use an enum to describe the features. Each item is a bit
          position

    Changes in v4:
        - Patch added
---
 xen/arch/arm/mm.c               | 10 +++++++++
 xen/arch/arm/p2m.c              | 46 +++++++++++++++++++++++++--------------
 xen/drivers/passthrough/iommu.c | 10 +++++++++
 xen/include/asm-arm/mm.h        |  3 +++
 xen/include/xen/hvm/iommu.h     |  6 +++++
 xen/include/xen/iommu.h         |  9 ++++++++
 6 files changed, 68 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index eac228c..7e8e06a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1235,6 +1235,16 @@ int is_iomem_page(unsigned long mfn)
         return 1;
     return 0;
 }
+
+void clear_and_clean_page(struct page_info *page)
+{
+    void *p = __map_domain_page(page);
+
+    clear_page(p);
+    clean_xen_dcache_va_range(p, PAGE_SIZE);
+    unmap_domain_page(p);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b85143b..96bc0ef 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -253,9 +253,15 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
     return e;
 }
 
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
+{
+    write_pte(p, pte);
+    if ( flush_cache )
+        clean_xen_dcache(*p);
+}
+
 /* Allocate a new page table page and hook it in via the given entry */
-static int p2m_create_table(struct domain *d,
-                            lpae_t *entry)
+static int p2m_create_table(struct domain *d, lpae_t *entry, bool_t flush_cache)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
@@ -272,11 +278,13 @@ static int p2m_create_table(struct domain *d,
 
     p = __map_domain_page(page);
     clear_page(p);
+    if ( flush_cache )
+        clean_xen_dcache_va_range(p, PAGE_SIZE);
     unmap_domain_page(p);
 
     pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
 
-    write_pte(entry, pte);
+    p2m_write_pte(entry, pte, flush_cache);
 
     return 0;
 }
@@ -308,6 +316,13 @@ static int apply_p2m_changes(struct domain *d,
     unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
     lpae_t pte;
+    bool_t flush_pt;
+
+    /* Some IOMMU don't support coherent PT walk. When the p2m is
+     * shared with the CPU, Xen has to make sure that the PT changes have
+     * reached the memory
+     */
+    flush_pt = iommu_enabled && !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
     spin_lock(&p2m->lock);
 
@@ -334,7 +349,8 @@ static int apply_p2m_changes(struct domain *d,
                 continue;
             }
 
-            rc = p2m_create_table(d, &first[first_table_offset(addr)]);
+            rc = p2m_create_table(d, &first[first_table_offset(addr)],
+                                  flush_pt);
             if ( rc < 0 )
             {
                 printk("p2m_populate_ram: L1 failed\n");
@@ -360,7 +376,8 @@ static int apply_p2m_changes(struct domain *d,
                 continue;
             }
 
-            rc = p2m_create_table(d, &second[second_table_offset(addr)]);
+            rc = p2m_create_table(d, &second[second_table_offset(addr)],
+                                  flush_pt);
             if ( rc < 0 ) {
                 printk("p2m_populate_ram: L2 failed\n");
                 goto out;
@@ -411,13 +428,15 @@ static int apply_p2m_changes(struct domain *d,
 
                     pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
 
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                 }
                 break;
             case INSERT:
                 {
                     pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                     maddr += PAGE_SIZE;
                 }
                 break;
@@ -433,7 +452,8 @@ static int apply_p2m_changes(struct domain *d,
                         count += 0x10;
 
                     memset(&pte, 0x00, sizeof(pte));
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                     count++;
                 }
                 break;
@@ -537,7 +557,6 @@ int p2m_alloc_table(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
-    void *p;
 
     page = alloc_domheap_pages(NULL, P2M_FIRST_ORDER, 0);
     if ( page == NULL )
@@ -546,13 +565,8 @@ int p2m_alloc_table(struct domain *d)
 
     spin_lock(&p2m->lock);
 
    /* Clear both first level pages */
-    p = __map_domain_page(page);
-    clear_page(p);
-    unmap_domain_page(p);
-
-    p = __map_domain_page(page + 1);
-    clear_page(p);
-    unmap_domain_page(p);
+    clear_and_clean_page(page);
+    clear_and_clean_page(page + 1);
 
     p2m->first_level = page;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 59f1c3e..cc12735 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -344,6 +344,16 @@ void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
+bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
+{
+    const struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    if ( !iommu_enabled )
+        return 0;
+
+    return test_bit(feature, hd->features);
+}
+
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b8d4e7d..3bef93f 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 /* Align Xen to a 2 MiB boundary. */
 #define XEN_PADDR_ALIGN (1 << 21)
@@ -341,6 +342,8 @@ static inline void put_page_and_type(struct page_info *page)
     put_page(page);
 }
 
+void clear_and_clean_page(struct page_info *page);
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 1259e16..693346c 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -34,6 +34,12 @@ struct hvm_iommu {
     /* List of DT devices assigned to this domain */
     struct list_head dt_devices;
 #endif
+
+    /* Features supported by the IOMMU */
+    DECLARE_BITMAP(features, IOMMU_FEAT_count);
 };
 
+#define iommu_set_feature(d, f)   set_bit((f), domain_hvm_iommu(d)->features)
+#define iommu_clear_feature(d, f) clear_bit((f), domain_hvm_iommu(d)->features)
+
 #endif /* __XEN_HVM_IOMMU_H__ */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index b7481dac..2ec7834 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -67,6 +67,15 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 
+enum iommu_feature
+{
+    IOMMU_FEAT_COHERENT_WALK,
+    IOMMU_FEAT_count
+};
+
+bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature);
+
+
 #ifdef HAS_PCI
 void pt_pci_init(void);
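
For readers following along: the patch's flush decision rests on a small feature-bitmap pattern, where the enum's final member (IOMMU_FEAT_count) sizes a per-domain bitmap queried with a test-bit accessor. The following is a minimal standalone C sketch of that pattern; `struct iommu_feat_sketch`, `feat_set` and `feat_test` are invented stand-ins for Xen's `hvm_iommu.features`, `iommu_set_feature` and `iommu_has_feature`, and Xen's DECLARE_BITMAP/set_bit/test_bit helpers are open-coded so the sketch compiles in isolation.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch only: the struct and helper names below are
 * hypothetical, not Xen's. The enum mirrors the one the patch adds. */

enum iommu_feature
{
    IOMMU_FEAT_COHERENT_WALK,
    IOMMU_FEAT_count          /* not a feature: the number of feature bits */
};

#define SKETCH_BITS_PER_LONG (8 * sizeof(unsigned long))
#define SKETCH_BITMAP_LONGS(n) \
    (((n) + SKETCH_BITS_PER_LONG - 1) / SKETCH_BITS_PER_LONG)

struct iommu_feat_sketch
{
    /* Stand-in for DECLARE_BITMAP(features, IOMMU_FEAT_count) */
    unsigned long features[SKETCH_BITMAP_LONGS(IOMMU_FEAT_count)];
};

/* Mark a feature as supported (what an IOMMU driver does at init time). */
static void feat_set(struct iommu_feat_sketch *hd, enum iommu_feature f)
{
    hd->features[f / SKETCH_BITS_PER_LONG] |=
        1UL << (f % SKETCH_BITS_PER_LONG);
}

/* Query a feature; plays the role of iommu_has_feature() in the patch. */
static int feat_test(const struct iommu_feat_sketch *hd, enum iommu_feature f)
{
    return !!(hd->features[f / SKETCH_BITS_PER_LONG] &
              (1UL << (f % SKETCH_BITS_PER_LONG)));
}
```

With this shape, p2m code only needs one boolean per operation: if the bit is clear (the IOMMU cannot walk page tables coherently), every PTE write is followed by a dcache clean; if a driver has set the bit, the flush is skipped entirely.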