From patchwork Thu Jul 12 06:18:32 2018
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 141789
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Jean-Philippe Brucker, Robin Murphy, Will Deacon, Joerg Roedel,
	linux-arm-kernel, iommu, linux-kernel
CC: Zhen Lei
Subject: [PATCH v3 6/6] iommu/arm-smmu-v3: add bootup option "iommu_strict_mode"
Date: Thu, 12 Jul 2018 14:18:32 +0800
Message-ID: <1531376312-2192-7-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.0
In-Reply-To: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>
References: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Because non-strict mode introduces a vulnerability window, add a boot
option so that the administrator can choose which mode to use. The
default mode is IOMMU_STRICT.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 12 ++++++++++
 drivers/iommu/arm-smmu-v3.c                     | 32 ++++++++++++++++++++++---
 2 files changed, 41 insertions(+), 3 deletions(-)

-- 
1.8.3

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index efc7aa7..0cc80bc 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1720,6 +1720,18 @@
 		nobypass	[PPC/POWERNV]
 			Disable IOMMU bypass, using IOMMU for PCI devices.
 
+	iommu_strict_mode=	[arm-smmu-v3]
+		0 - strict mode
+		    Make sure all related TLBs are invalidated before the
+		    memory is released.
+		1 - non-strict mode
+		    Defer TLB invalidation and release the memory first. This mode
+		    introduces a vulnerability window: an untrusted device can
+		    access the reused memory because the TLB entries may still be valid.
+		    Please consider this carefully before choosing this mode.
+		    Note that VFIO always uses strict mode.
+		others - strict mode
+
 	iommu.passthrough=
 			[ARM64] Configure DMA to bypass the IOMMU by default.
 			Format: { "0" | "1" }

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4a198a0..9b72fc4 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -631,6 +631,24 @@ struct arm_smmu_option_prop {
 	{ 0, NULL},
 };
 
+static u32 iommu_strict_mode __read_mostly = IOMMU_STRICT;
+
+static int __init setup_iommu_strict_mode(char *str)
+{
+	int strict_mode = IOMMU_STRICT;
+
+	get_option(&str, &strict_mode);
+	if (strict_mode == IOMMU_NON_STRICT) {
+		iommu_strict_mode = strict_mode;
+		pr_warn("WARNING: iommu non-strict mode is chosen.\n"
+			"It's good for scatter-gather performance but lacks full isolation\n");
+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
+	}
+
+	return 0;
+}
+early_param("iommu_strict_mode", setup_iommu_strict_mode);
+
 static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
						 struct arm_smmu_device *smmu)
 {
@@ -1441,7 +1459,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 	case IOMMU_CAP_NOEXEC:
 		return true;
 	case IOMMU_CAP_NON_STRICT:
-		return true;
+		return iommu_strict_mode == IOMMU_NON_STRICT;
 	default:
 		return false;
 	}
@@ -1750,6 +1768,14 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	return ret;
 }
 
+static u32 arm_smmu_strict_mode(struct iommu_domain *domain)
+{
+	if (iommu_strict_mode == IOMMU_NON_STRICT)
+		return IOMMU_DOMAIN_STRICT_MODE(domain);
+
+	return IOMMU_STRICT;
+}
+
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
			phys_addr_t paddr, size_t size, int prot)
 {
@@ -1769,7 +1795,7 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 	if (!ops)
 		return 0;
 
-	return ops->unmap(ops, iova | IOMMU_DOMAIN_STRICT_MODE(domain), size);
+	return ops->unmap(ops, iova | arm_smmu_strict_mode(domain), size);
 }
 
 static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
@@ -1784,7 +1810,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
 {
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
 
-	if (smmu && (IOMMU_DOMAIN_STRICT_MODE(domain) == IOMMU_STRICT))
+	if (smmu && (arm_smmu_strict_mode(domain) == IOMMU_STRICT))
 		__arm_smmu_tlb_sync(smmu);
 }
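
For anyone who wants to sanity-check the accepted values of the new parameter
without booting a kernel, here is a minimal userspace sketch (not part of the
patch) that mirrors the value handling of setup_iommu_strict_mode() above. It
assumes IOMMU_STRICT == 0 and IOMMU_NON_STRICT == 1, matching the
documentation entry; parse_mode() is a hypothetical stand-in for the kernel's
get_option(), and everything below is illustrative only.

/*
 * Illustrative userspace mock of the iommu_strict_mode= parsing; the
 * macro values are assumed from this series, not taken from a header.
 */
#include <stdio.h>
#include <stdlib.h>

#define IOMMU_STRICT		0
#define IOMMU_NON_STRICT	1

/* Stand-in for the kernel's get_option(): parse one integer, default 0. */
static unsigned int parse_mode(const char *arg)
{
	unsigned int mode = IOMMU_STRICT;

	if (arg && *arg)
		mode = (unsigned int)strtoul(arg, NULL, 0);

	/* Any value other than 1 falls back to strict mode, as documented. */
	return (mode == IOMMU_NON_STRICT) ? IOMMU_NON_STRICT : IOMMU_STRICT;
}

int main(int argc, char **argv)
{
	const char *arg = (argc > 1) ? argv[1] : "0";
	unsigned int mode = parse_mode(arg);

	printf("iommu_strict_mode=%s -> %s\n", arg,
	       mode == IOMMU_NON_STRICT ?
	       "non-strict (TLB invalidation deferred)" :
	       "strict (TLBs invalidated before memory is freed)");
	return 0;
}

Running it with "1" prints the non-strict description; any other value falls
back to strict mode, the same default the patch uses.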