From patchwork Thu Jun 20 05:49:44 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hiroshi Doyu <hdoyu@nvidia.com>
X-Patchwork-Id: 18000
From: Hiroshi Doyu <hdoyu@nvidia.com>
Date: Thu, 20 Jun 2013 08:49:44 +0300
Message-ID: <1371707384-30037-4-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.8.1.5
In-Reply-To: <1371707384-30037-1-git-send-email-hdoyu@nvidia.com>
References: <1371707384-30037-1-git-send-email-hdoyu@nvidia.com>
Cc: linux-tegra@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
 linux-arm-kernel@lists.infradead.org
Subject: [Linaro-mm-sig] [RFC 3/3] iommu/tegra: smmu: Support read-only mapping

Support read-only mapping via struct dma_attrs.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
 drivers/iommu/tegra-smmu.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index fab1f19..3aff4cd 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -862,12 +862,13 @@ static size_t __smmu_iommu_unmap_largepage(struct smmu_as *as, dma_addr_t iova)
 }
 
 static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
-				unsigned long pfn)
+				unsigned long pfn, int prot)
 {
 	struct smmu_device *smmu = as->smmu;
 	unsigned long *pte;
 	unsigned int *count;
 	struct page *page;
+	int attrs = as->pte_attr;
 
 	pte = locate_pte(as, iova, true, &page, &count);
 	if (WARN_ON(!pte))
@@ -875,7 +876,11 @@ static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
 
 	if (*pte == _PTE_VACANT(iova))
 		(*count)++;
-	*pte = SMMU_PFN_TO_PTE(pfn, as->pte_attr);
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
+
+	*pte = SMMU_PFN_TO_PTE(pfn, attrs);
 	FLUSH_CPU_DCACHE(pte, page, sizeof(*pte));
 	flush_ptc_and_tlb(smmu, as, iova, pte, page, 0);
 	put_signature(as, iova, pfn);
@@ -883,23 +888,27 @@ static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
 }
 
 static int __smmu_iommu_map_page(struct smmu_as *as, dma_addr_t iova,
-				 phys_addr_t pa)
+				 phys_addr_t pa, int prot)
 {
 	unsigned long pfn = __phys_to_pfn(pa);
 
-	return __smmu_iommu_map_pfn(as, iova, pfn);
+	return __smmu_iommu_map_pfn(as, iova, pfn, prot);
 }
 
 static int __smmu_iommu_map_largepage(struct smmu_as *as, dma_addr_t iova,
-				      phys_addr_t pa)
+				      phys_addr_t pa, int prot)
 {
 	unsigned long pdn = SMMU_ADDR_TO_PDN(iova);
 	unsigned long *pdir = (unsigned long *)page_address(as->pdir_page);
+	int attrs = _PDE_ATTR;
 
 	if (pdir[pdn] != _PDE_VACANT(pdn))
 		return -EINVAL;
 
-	pdir[pdn] = SMMU_ADDR_TO_PDN(pa) << 10 | _PDE_ATTR;
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
+
+	pdir[pdn] = SMMU_ADDR_TO_PDN(pa) << 10 | attrs;
 	FLUSH_CPU_DCACHE(&pdir[pdn], as->pdir_page, sizeof pdir[pdn]);
 	flush_ptc_and_tlb(as->smmu, as, iova, &pdir[pdn],
 			  as->pdir_page, 1);
@@ -912,7 +921,8 @@ static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	struct smmu_as *as = domain->priv;
 	unsigned long flags;
 	int err;
-	int (*fn)(struct smmu_as *as, dma_addr_t iova, phys_addr_t pa);
+	int (*fn)(struct smmu_as *as, dma_addr_t iova, phys_addr_t pa,
+		  int prot);
 
 	dev_dbg(as->smmu->dev, "[%d] %08lx:%08x\n", as->asid, iova, pa);
 
@@ -929,7 +939,7 @@ static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	}
 
 	spin_lock_irqsave(&as->lock, flags);
-	err = fn(as, iova, pa);
+	err = fn(as, iova, pa, prot);
 	spin_unlock_irqrestore(&as->lock, flags);
 	return err;
 }
@@ -943,6 +953,10 @@ static int smmu_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 	unsigned long *pdir = page_address(as->pdir_page);
 	int err = 0;
 	bool flush_all = (total > SZ_512) ? true : false;
+	int attrs = as->pte_attr;
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
 
 	spin_lock_irqsave(&as->lock, flags);
 
@@ -977,8 +991,7 @@ static int smmu_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 
 			if (*pte == _PTE_VACANT(iova + i * PAGE_SIZE))
 				(*rest)++;
-			*pte = SMMU_PFN_TO_PTE(page_to_pfn(pages[i]),
-					       as->pte_attr);
+			*pte = SMMU_PFN_TO_PTE(page_to_pfn(pages[i]), attrs);
 		}
 
 		pte = &ptbl[ptn];
@@ -1010,6 +1023,10 @@ static int smmu_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 	bool flush_all = (nents * PAGE_SIZE > SZ_512) ? true : false;
 	struct smmu_as *as = domain->priv;
 	struct smmu_device *smmu = as->smmu;
+	int attrs = as->pte_attr;
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
 
 	for (count = 0, s = sgl; count < nents; s = sg_next(s)) {
 		phys_addr_t phys = page_to_phys(sg_page(s));
@@ -1053,7 +1070,7 @@ static int smmu_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 				(*rest)++;
 			}
 
-			*pte = SMMU_PFN_TO_PTE(pfn + i, as->pte_attr);
+			*pte = SMMU_PFN_TO_PTE(pfn + i, attrs);
 		}
 
 		pte = &ptbl[ptn];
@@ -1191,7 +1208,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 		struct page *page;
 
 		page = as->smmu->avp_vector_page;
-		__smmu_iommu_map_pfn(as, 0, page_to_pfn(page));
+		__smmu_iommu_map_pfn(as, 0, page_to_pfn(page), 0);
 
 		pr_debug("Reserve \"page zero\" \
 			 for AVP vectors using a common dummy\n");