From patchwork Thu Oct 29 17:49:42 2015
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 55794
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, will.deacon@arm.com
Cc: suravee.suthikulpanit@amd.com, christoffer.dall@linaro.org,
	linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH v2] vfio/type1: handle case where IOMMU does not support PAGE_SIZE size
Date: Thu, 29 Oct 2015 17:49:42 +0000
Message-Id: <1446140982-4059-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

The current vfio_pgsize_bitmap code hides the supported IOMMU page sizes
smaller than PAGE_SIZE. As a result, in case the IOMMU does not support
the PAGE_SIZE page size, the alignment check on map/unmap is done
against larger page sizes, if any. This check can fail even though the
mapping could be done with pages smaller than PAGE_SIZE.

This patch modifies the vfio_pgsize_bitmap implementation so that, in
case the IOMMU supports page sizes smaller than PAGE_SIZE, we pretend
PAGE_SIZE is supported and hide the sub-PAGE_SIZE sizes. That way the
user will be able to map/unmap buffers whose size/start address is
aligned with PAGE_SIZE.
The pinning code uses that granularity, while the IOMMU driver can use
the sub-PAGE_SIZE sizes to map the buffer.

Signed-off-by: Eric Auger
Signed-off-by: Alex Williamson
Acked-by: Will Deacon
---

This was tested on AMD Seattle with a 64kB page host. The ARM MMU-401
currently exposes 4kB, 2MB and 1GB page support. With a 64kB page host,
the map/unmap check is done against 2MB. Some alignment checks fail, so
VFIO_IOMMU_MAP_DMA fails even though we could map using the 4kB IOMMU
page size.

v1 -> v2:
- correct PAGE_HOST typo in comment and commit msg
- add Will's R-b

RFC -> PATCH v1:
- move all modifications into vfio_pgsize_bitmap following Alex'
  suggestion to expose a fake PAGE_SIZE support
- restore WARN_ON's

 drivers/vfio/vfio_iommu_type1.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..59d47cb 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -403,13 +403,26 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = PAGE_MASK;
+	unsigned long bitmap = ULONG_MAX;
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next)
 		bitmap &= domain->domain->ops->pgsize_bitmap;
 	mutex_unlock(&iommu->lock);
 
+	/*
+	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
+	 * we pretend PAGE_SIZE is supported and hide sub-PAGE_SIZE sizes.
+	 * That way the user will be able to map/unmap buffers whose size/
+	 * start address is aligned with PAGE_SIZE. Pinning code uses that
+	 * granularity while iommu driver can use the sub-PAGE_SIZE size
+	 * to map the buffer.
+	 */
+	if (bitmap & ~PAGE_MASK) {
+		bitmap &= PAGE_MASK;
+		bitmap |= PAGE_SIZE;
+	}
+
 	return bitmap;
 }

-- 
1.9.1