From patchwork Fri Jan 8 14:52:09 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 359905
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker
Subject: [PATCH v9 01/10] iommu: Remove obsolete comment
Date: Fri, 8 Jan 2021 15:52:09 +0100
Message-Id: <20210108145217.2254447-2-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

Commit 986d5ecc5699 ("iommu: Move fwspec->iommu_priv to struct dev_iommu")
removed iommu_priv from fwspec. Update the struct doc.

Signed-off-by: Jean-Philippe Brucker
Acked-by: Jonathan Cameron
---
 include/linux/iommu.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b3f0e2018c62..26bcde5e7746 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -570,7 +570,6 @@ struct iommu_group *fsl_mc_device_group(struct device *dev);
  * struct iommu_fwspec - per-device IOMMU instance data
  * @ops: ops for this device's IOMMU
  * @iommu_fwnode: firmware handle for this device's IOMMU
- * @iommu_priv: IOMMU driver private data for this device
  * @num_pasid_bits: number of PASID bits supported by this device
  * @num_ids: number of associated device IDs
  * @ids: IDs which this device may present to the IOMMU

From patchwork Fri Jan 8 14:52:10 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 359203
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker
Subject: [PATCH v9 02/10] iommu/arm-smmu-v3: Use device properties for pasid-num-bits
Date: Fri, 8 Jan 2021 15:52:10 +0100
Message-Id: <20210108145217.2254447-3-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

The pasid-num-bits property shouldn't need a dedicated fwspec field; it's
a job for device properties. Add properties for IORT, and access the
number of PASID bits using device_property_read_u32().
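As an illustration of the consumer side (the arm-smmu-v3 hunk below does the
same thing inline), a driver could read the property like this; the helper
name and the zero default are inventions of this sketch, not part of the
patch:

#include <linux/device.h>
#include <linux/property.h>

/* Illustrative helper: read "pasid-num-bits" wherever it came from (DT or IORT) */
static u32 example_pasid_bits(struct device *dev)
{
	u32 pasid_bits = 0;

	/*
	 * device_property_read_u32() covers both firmware nodes and the
	 * software property added by iort_named_component_init(). When the
	 * property is absent, pasid_bits keeps its initial value of 0.
	 */
	device_property_read_u32(dev, "pasid-num-bits", &pasid_bits);

	return pasid_bits;
}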
Suggested-by: Robin Murphy
Signed-off-by: Jean-Philippe Brucker
Acked-by: Jonathan Cameron
---
 include/linux/iommu.h                       |  2 --
 drivers/acpi/arm64/iort.c                   | 13 +++++++------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  3 ++-
 drivers/iommu/of_iommu.c                    |  5 -----
 4 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 26bcde5e7746..583c734b2e87 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -570,7 +570,6 @@ struct iommu_group *fsl_mc_device_group(struct device *dev);
  * struct iommu_fwspec - per-device IOMMU instance data
  * @ops: ops for this device's IOMMU
  * @iommu_fwnode: firmware handle for this device's IOMMU
- * @num_pasid_bits: number of PASID bits supported by this device
  * @num_ids: number of associated device IDs
  * @ids: IDs which this device may present to the IOMMU
  */
@@ -578,7 +577,6 @@ struct iommu_fwspec {
 	const struct iommu_ops	*ops;
 	struct fwnode_handle	*iommu_fwnode;
 	u32			flags;
-	u32			num_pasid_bits;
 	unsigned int		num_ids;
 	u32			ids[];
 };
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index d4eac6d7e9fb..c9a8bbb74b09 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -968,15 +968,16 @@ static int iort_pci_iommu_init(struct pci_dev *pdev, u16 alias, void *data)
 static void iort_named_component_init(struct device *dev,
 				      struct acpi_iort_node *node)
 {
+	struct property_entry props[2] = {};
 	struct acpi_iort_named_component *nc;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-
-	if (!fwspec)
-		return;
 
 	nc = (struct acpi_iort_named_component *)node->node_data;
-	fwspec->num_pasid_bits = FIELD_GET(ACPI_IORT_NC_PASID_BITS,
-					   nc->node_flags);
+	props[0] = PROPERTY_ENTRY_U32("pasid-num-bits",
+				      FIELD_GET(ACPI_IORT_NC_PASID_BITS,
						nc->node_flags));
+
+	if (device_add_properties(dev, props))
+		dev_warn(dev, "Could not add device properties\n");
 }
 
 static int iort_nc_iommu_map(struct device *dev, struct acpi_iort_node *node)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8ca7415d785d..6a53b4edf054 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2366,7 +2366,8 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 		}
 	}
 
-	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
+	device_property_read_u32(dev, "pasid-num-bits", &master->ssid_bits);
+	master->ssid_bits = min(smmu->ssid_bits, master->ssid_bits);
 
 	/*
 	 * Note that PASID must be enabled before, and disabled after ATS:
diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
index e505b9130a1c..a9d2df001149 100644
--- a/drivers/iommu/of_iommu.c
+++ b/drivers/iommu/of_iommu.c
@@ -210,11 +210,6 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
 					   of_pci_iommu_init, &info);
 	} else {
 		err = of_iommu_configure_device(master_np, dev, id);
-
-		fwspec = dev_iommu_fwspec_get(dev);
-		if (!err && fwspec)
-			of_property_read_u32(master_np, "pasid-num-bits",
-					     &fwspec->num_pasid_bits);
 	}
 
 	/*

From patchwork Fri Jan 8 14:52:11 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 359904
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker, Arnd Bergmann, David Woodhouse,
    Greg Kroah-Hartman, Zhou Wang
Subject: [PATCH v9 03/10] iommu: Separate IOMMU_DEV_FEAT_IOPF from IOMMU_DEV_FEAT_SVA
Date: Fri, 8 Jan 2021 15:52:11 +0100
Message-Id: <20210108145217.2254447-4-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

Some devices manage I/O Page Faults (IOPF) themselves instead of relying on
PCIe PRI or Arm SMMU stall. Allow their drivers to enable SVA without
mandating IOMMU-managed IOPF. The other device drivers now need to first
enable IOMMU_DEV_FEAT_IOPF before enabling IOMMU_DEV_FEAT_SVA.

Signed-off-by: Jean-Philippe Brucker
---
Cc: Arnd Bergmann
Cc: David Woodhouse
Cc: Greg Kroah-Hartman
Cc: Joerg Roedel
Cc: Lu Baolu
Cc: Will Deacon
Cc: Zhangfei Gao
Cc: Zhou Wang
---
 include/linux/iommu.h | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 583c734b2e87..701b2eeb0dc5 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -156,10 +156,24 @@ struct iommu_resv_region {
 	enum iommu_resv_type	type;
 };
 
-/* Per device IOMMU features */
+/**
+ * enum iommu_dev_features - Per device IOMMU features
+ * @IOMMU_DEV_FEAT_AUX: Auxiliary domain feature
+ * @IOMMU_DEV_FEAT_SVA: Shared Virtual Addresses
+ * @IOMMU_DEV_FEAT_IOPF: I/O Page Faults such as PRI or Stall. Generally using
+ *			 %IOMMU_DEV_FEAT_SVA requires %IOMMU_DEV_FEAT_IOPF, but
+ *			 some devices manage I/O Page Faults themselves instead
+ *			 of relying on the IOMMU. When supported, this feature
+ *			 must be enabled before and disabled after
+ *			 %IOMMU_DEV_FEAT_SVA.
+ *
+ * Device drivers query whether a feature is supported using
+ * iommu_dev_has_feature(), and enable it using iommu_dev_enable_feature().
+ */
 enum iommu_dev_features {
-	IOMMU_DEV_FEAT_AUX,	/* Aux-domain feature */
-	IOMMU_DEV_FEAT_SVA,	/* Shared Virtual Addresses */
+	IOMMU_DEV_FEAT_AUX,
+	IOMMU_DEV_FEAT_SVA,
+	IOMMU_DEV_FEAT_IOPF,
 };
 
 #define IOMMU_PASID_INVALID (-1U)
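As a usage sketch (the function names here are placeholders, not code from
this series), a device driver that relies on IOMMU-managed I/O page faults
would now enable and disable the two features in the order documented in the
new kernel-doc above:

#include <linux/iommu.h>

static int example_enable_sva(struct device *dev)
{
	int ret;

	/* IOPF must be enabled before SVA... */
	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	if (ret)
		return ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		/* ...and rolled back when SVA cannot be enabled */
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);

	return ret;
}

static void example_disable_sva(struct device *dev)
{
	/* Teardown is the mirror image: SVA first, then IOPF */
	iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
	iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
}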
From patchwork Fri Jan 8 14:52:12 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 359200
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker, David Woodhouse
Subject: [PATCH v9 04/10] iommu/vt-d: Support IOMMU_DEV_FEAT_IOPF
Date: Fri, 8 Jan 2021 15:52:12 +0100
Message-Id: <20210108145217.2254447-5-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

Allow drivers to query and enable IOMMU_DEV_FEAT_IOPF, which amounts to
checking whether PRI is enabled.

Signed-off-by: Jean-Philippe Brucker
---
Cc: David Woodhouse
Cc: Lu Baolu
---
 drivers/iommu/intel/iommu.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 788119c5b021..630639c753f9 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5263,6 +5263,8 @@ static int siov_find_pci_dvsec(struct pci_dev *pdev)
 static bool
 intel_iommu_dev_has_feat(struct device *dev, enum iommu_dev_features feat)
 {
+	struct device_domain_info *info = get_domain_info(dev);
+
 	if (feat == IOMMU_DEV_FEAT_AUX) {
 		int ret;
 
@@ -5277,13 +5279,13 @@ intel_iommu_dev_has_feat(struct device *dev, enum iommu_dev_features feat)
 		return !!siov_find_pci_dvsec(to_pci_dev(dev));
 	}
 
-	if (feat == IOMMU_DEV_FEAT_SVA) {
-		struct device_domain_info *info = get_domain_info(dev);
+	if (feat == IOMMU_DEV_FEAT_IOPF)
+		return info && info->pri_supported;
 
+	if (feat == IOMMU_DEV_FEAT_SVA)
 		return info && (info->iommu->flags & VTD_FLAG_SVM_CAPABLE) &&
 			info->pasid_supported && info->pri_supported &&
 			info->ats_supported;
-	}
 
 	return false;
 }
@@ -5294,6 +5296,9 @@ intel_iommu_dev_enable_feat(struct device *dev, enum iommu_dev_features feat)
 	if (feat == IOMMU_DEV_FEAT_AUX)
 		return intel_iommu_enable_auxd(dev);
 
+	if (feat == IOMMU_DEV_FEAT_IOPF)
+		return intel_iommu_dev_has_feat(dev, feat) ? 0 : -ENODEV;
+
 	if (feat == IOMMU_DEV_FEAT_SVA) {
 		struct device_domain_info *info = get_domain_info(dev);
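For context, a hedged sketch of what the change means for a caller on VT-d:
querying IOMMU_DEV_FEAT_IOPF now reduces to PRI support, and enabling it
merely re-validates the same condition (the wrapper below is illustrative
only, not part of the patch):

#include <linux/iommu.h>

/* Illustrative: probe IOPF support before attempting to enable SVA */
static bool example_supports_iopf(struct device *dev)
{
	/* On VT-d this is info && info->pri_supported */
	if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_IOPF))
		return false;

	/* With this patch, enabling returns 0 when PRI is present, -ENODEV otherwise */
	return iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF) == 0;
}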
From patchwork Fri Jan 8 14:52:13 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 359202
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker, Arnd Bergmann, Greg Kroah-Hartman, Zhou Wang
Subject: [PATCH v9 05/10] uacce: Enable IOMMU_DEV_FEAT_IOPF
Date: Fri, 8 Jan 2021 15:52:13 +0100
Message-Id: <20210108145217.2254447-6-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

The IOPF (I/O Page Fault) feature is now enabled independently from the SVA
feature, because some IOPF implementations are device-specific and do not
require IOMMU support for PCIe PRI or Arm SMMU stall.

Enable IOPF unconditionally when enabling SVA for now. In the future, if a
device driver implementing a uacce interface doesn't need IOPF support, it
will need to tell the uacce module, for example with a new flag.

Signed-off-by: Jean-Philippe Brucker
Acked-by: Zhangfei Gao
---
Cc: Arnd Bergmann
Cc: Greg Kroah-Hartman
Cc: Zhangfei Gao
Cc: Zhou Wang
---
 drivers/misc/uacce/uacce.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
index d07af4edfcac..41ef1eb62a14 100644
--- a/drivers/misc/uacce/uacce.c
+++ b/drivers/misc/uacce/uacce.c
@@ -385,6 +385,24 @@ static void uacce_release(struct device *dev)
 	kfree(uacce);
 }
 
+static unsigned int uacce_enable_sva(struct device *parent, unsigned int flags)
+{
+	if (!(flags & UACCE_DEV_SVA))
+		return flags;
+
+	flags &= ~UACCE_DEV_SVA;
+
+	if (iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_IOPF))
+		return flags;
+
+	if (iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA)) {
+		iommu_dev_disable_feature(parent, IOMMU_DEV_FEAT_IOPF);
+		return flags;
+	}
+
+	return flags | UACCE_DEV_SVA;
+}
+
 /**
  * uacce_alloc() - alloc an accelerator
  * @parent: pointer of uacce parent device
@@ -404,11 +422,7 @@ struct uacce_device *uacce_alloc(struct device *parent,
 	if (!uacce)
 		return ERR_PTR(-ENOMEM);
 
-	if (flags & UACCE_DEV_SVA) {
-		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
-		if (ret)
-			flags &= ~UACCE_DEV_SVA;
-	}
+	flags = uacce_enable_sva(parent, flags);
 
 	uacce->parent = parent;
 	uacce->flags = flags;
@@ -432,8 +446,10 @@ struct uacce_device *uacce_alloc(struct device *parent,
 	return uacce;
 
 err_with_uacce:
-	if (flags & UACCE_DEV_SVA)
+	if (flags & UACCE_DEV_SVA) {
 		iommu_dev_disable_feature(uacce->parent, IOMMU_DEV_FEAT_SVA);
+		iommu_dev_disable_feature(uacce->parent, IOMMU_DEV_FEAT_IOPF);
+	}
 	kfree(uacce);
 	return ERR_PTR(ret);
 }
@@ -487,8 +503,10 @@ void uacce_remove(struct uacce_device *uacce)
 	mutex_unlock(&uacce->queues_lock);
 
 	/* disable sva now since no opened queues */
-	if (uacce->flags & UACCE_DEV_SVA)
+	if (uacce->flags & UACCE_DEV_SVA) {
 		iommu_dev_disable_feature(uacce->parent, IOMMU_DEV_FEAT_SVA);
+		iommu_dev_disable_feature(uacce->parent, IOMMU_DEV_FEAT_IOPF);
+	}
 
 	if (uacce->cdev)
 		cdev_device_del(uacce->cdev, &uacce->dev);
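For reference, a sketch of how a parent driver observes the result of this
change: uacce_alloc() quietly clears UACCE_DEV_SVA when either feature cannot
be enabled, so the caller can only find out by checking the returned flags.
The probe function, interface name and ops pointer below are assumptions of
the sketch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/uacce.h>

static int example_probe(struct device *dev)
{
	struct uacce_device *uacce;
	struct uacce_interface ifc = {
		.name = "example",
		.flags = UACCE_DEV_SVA,
		/* .ops = &example_uacce_ops, assumed to be defined elsewhere */
	};

	uacce = uacce_alloc(dev, &ifc);
	if (IS_ERR(uacce))
		return PTR_ERR(uacce);

	/* SVA (and, with this patch, IOPF) may have been silently disabled */
	if (!(uacce->flags & UACCE_DEV_SVA))
		dev_info(dev, "running without SVA/IOPF\n");

	return 0;
}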
From patchwork Fri Jan 8 14:52:14 2021
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 358985
From: Jean-Philippe Brucker
To: joro@8bytes.org, will@kernel.org
Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com,
    sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org,
    robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com,
    Jean-Philippe Brucker
Subject: [PATCH v9 06/10] iommu: Add a page fault handler
Date: Fri, 8 Jan 2021 15:52:14 +0100
Message-Id: <20210108145217.2254447-7-jean-philippe@linaro.org>
In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org>

Some systems allow devices to handle I/O Page Faults in the core mm, for
example systems implementing the PCIe PRI extension or the Arm SMMU stall
model. Infrastructure for reporting these recoverable page faults was added
to the IOMMU core by commit 0c830e6b3282 ("iommu: Introduce device fault
report API"). Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them to
IOPF-capable devices. Drivers can choose between a single global workqueue,
one per IOMMU device, one per low-level fault queue, one per domain, etc.

When it receives a fault event, typically in an IRQ handler, the IOMMU driver
reports the fault using iommu_report_device_fault(), which calls the
registered handler. The page fault handler then calls the mm fault handler,
and reports either success or failure with iommu_page_response(). When the
handler succeeds, the IOMMU retries the access.

The iopf_param pointer could be embedded into iommu_fault_param. But putting
iopf_param into the iommu_param structure allows us not to care about
ordering between calls to iopf_queue_add_device() and
iommu_register_device_fault_handler().

Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/Makefile        |   1 +
 drivers/iommu/iommu-sva-lib.h |  53 ++++
 include/linux/iommu.h         |   2 +
 drivers/iommu/io-pgfault.c    | 462 ++++++++++++++++++++++++++++++++++
 4 files changed, 518 insertions(+)
 create mode 100644 drivers/iommu/io-pgfault.c

--
2.29.2

Reviewed-by: Jonathan Cameron

diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 61bd30cd8369..60fafc23dee6 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -28,3 +28,4 @@ obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
 obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
 obj-$(CONFIG_IOMMU_SVA_LIB) += iommu-sva-lib.o
+obj-$(CONFIG_IOMMU_SVA_LIB) += io-pgfault.o
diff --git a/drivers/iommu/iommu-sva-lib.h b/drivers/iommu/iommu-sva-lib.h
index b40990aef3fd..031155010ca8 100644
--- a/drivers/iommu/iommu-sva-lib.h
+++ b/drivers/iommu/iommu-sva-lib.h
@@ -12,4 +12,57 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max);
 void iommu_sva_free_pasid(struct mm_struct *mm);
 struct mm_struct *iommu_sva_find(ioasid_t pasid);
 
+/* I/O Page fault */
+struct device;
+struct iommu_fault;
+struct iopf_queue;
+
+#ifdef CONFIG_IOMMU_SVA_LIB
+int iommu_queue_iopf(struct iommu_fault *fault, void *cookie);
+
+int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev);
+int iopf_queue_remove_device(struct iopf_queue *queue,
+			     struct device *dev);
+int iopf_queue_flush_dev(struct device *dev);
+struct iopf_queue *iopf_queue_alloc(const char *name);
+void iopf_queue_free(struct iopf_queue *queue);
+int iopf_queue_discard_partial(struct iopf_queue *queue);
+
+#else /* CONFIG_IOMMU_SVA_LIB */
+static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
+{
+	return -ENODEV;
+}
+
+static inline int iopf_queue_add_device(struct iopf_queue *queue,
+					struct device *dev)
+{
+	return -ENODEV;
+}
+
+static inline int iopf_queue_remove_device(struct iopf_queue *queue,
+					   struct device *dev)
+{
+	return -ENODEV;
+} + +static inline int iopf_queue_flush_dev(struct device *dev) +{ + return -ENODEV; +} + +static inline struct iopf_queue *iopf_queue_alloc(const char *name) +{ + return NULL; +} + +static inline void iopf_queue_free(struct iopf_queue *queue) +{ +} + +static inline int iopf_queue_discard_partial(struct iopf_queue *queue) +{ + return -ENODEV; +} +#endif /* CONFIG_IOMMU_SVA_LIB */ #endif /* _IOMMU_SVA_LIB_H */ diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 701b2eeb0dc5..1c721f472d25 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -366,6 +366,7 @@ struct iommu_fault_param { * struct dev_iommu - Collection of per-device IOMMU data * * @fault_param: IOMMU detected device fault reporting data + * @iopf_param: I/O Page Fault queue and data * @fwspec: IOMMU fwspec data * @iommu_dev: IOMMU device this device is linked to * @priv: IOMMU Driver private data @@ -376,6 +377,7 @@ struct iommu_fault_param { struct dev_iommu { struct mutex lock; struct iommu_fault_param *fault_param; + struct iopf_device_param *iopf_param; struct iommu_fwspec *fwspec; struct iommu_device *iommu_dev; void *priv; diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c new file mode 100644 index 000000000000..fc1d5d29ac37 --- /dev/null +++ b/drivers/iommu/io-pgfault.c @@ -0,0 +1,462 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Handle device page faults + * + * Copyright (C) 2020 ARM Ltd. + */ + +#include +#include +#include +#include +#include + +#include "iommu-sva-lib.h" + +/** + * struct iopf_queue - IO Page Fault queue + * @wq: the fault workqueue + * @devices: devices attached to this queue + * @lock: protects the device list + */ +struct iopf_queue { + struct workqueue_struct *wq; + struct list_head devices; + struct mutex lock; +}; + +/** + * struct iopf_device_param - IO Page Fault data attached to a device + * @dev: the device that owns this param + * @queue: IOPF queue + * @queue_list: index into queue->devices + * @partial: faults that are part of a Page Request Group for which the last + * request hasn't been submitted yet. 
+ */ +struct iopf_device_param { + struct device *dev; + struct iopf_queue *queue; + struct list_head queue_list; + struct list_head partial; +}; + +struct iopf_fault { + struct iommu_fault fault; + struct list_head list; +}; + +struct iopf_group { + struct iopf_fault last_fault; + struct list_head faults; + struct work_struct work; + struct device *dev; +}; + +static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf, + enum iommu_page_response_code status) +{ + struct iommu_page_response resp = { + .version = IOMMU_PAGE_RESP_VERSION_1, + .pasid = iopf->fault.prm.pasid, + .grpid = iopf->fault.prm.grpid, + .code = status, + }; + + if ((iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) && + (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)) + resp.flags = IOMMU_PAGE_RESP_PASID_VALID; + + return iommu_page_response(dev, &resp); +} + +static enum iommu_page_response_code +iopf_handle_single(struct iopf_fault *iopf) +{ + vm_fault_t ret; + struct mm_struct *mm; + struct vm_area_struct *vma; + unsigned int access_flags = 0; + unsigned int fault_flags = FAULT_FLAG_REMOTE; + struct iommu_fault_page_request *prm = &iopf->fault.prm; + enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID; + + if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)) + return status; + + mm = iommu_sva_find(prm->pasid); + if (IS_ERR_OR_NULL(mm)) + return status; + + mmap_read_lock(mm); + + vma = find_extend_vma(mm, prm->addr); + if (!vma) + /* Unmapped area */ + goto out_put_mm; + + if (prm->perm & IOMMU_FAULT_PERM_READ) + access_flags |= VM_READ; + + if (prm->perm & IOMMU_FAULT_PERM_WRITE) { + access_flags |= VM_WRITE; + fault_flags |= FAULT_FLAG_WRITE; + } + + if (prm->perm & IOMMU_FAULT_PERM_EXEC) { + access_flags |= VM_EXEC; + fault_flags |= FAULT_FLAG_INSTRUCTION; + } + + if (!(prm->perm & IOMMU_FAULT_PERM_PRIV)) + fault_flags |= FAULT_FLAG_USER; + + if (access_flags & ~vma->vm_flags) + /* Access fault */ + goto out_put_mm; + + ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL); + status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID : + IOMMU_PAGE_RESP_SUCCESS; + +out_put_mm: + mmap_read_unlock(mm); + mmput(mm); + + return status; +} + +static void iopf_handle_group(struct work_struct *work) +{ + struct iopf_group *group; + struct iopf_fault *iopf, *next; + enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS; + + group = container_of(work, struct iopf_group, work); + + list_for_each_entry_safe(iopf, next, &group->faults, list) { + /* + * For the moment, errors are sticky: don't handle subsequent + * faults in the group if there is an error. + */ + if (status == IOMMU_PAGE_RESP_SUCCESS) + status = iopf_handle_single(iopf); + + if (!(iopf->fault.prm.flags & + IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) + kfree(iopf); + } + + iopf_complete_group(group->dev, &group->last_fault, status); + kfree(group); +} + +/** + * iommu_queue_iopf - IO Page Fault handler + * @fault: fault event + * @cookie: struct device, passed to iommu_register_device_fault_handler. + * + * Add a fault to the device workqueue, to be handled by mm. + * + * This module doesn't handle PCI PASID Stop Marker; IOMMU drivers must discard + * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't + * expect a response. It may be generated when disabling a PASID (issuing a + * PASID stop request) by some PCI devices. + * + * The PASID stop request is issued by the device driver before unbind(). 
Once + * it completes, no page request is generated for this PASID anymore and + * outstanding ones have been pushed to the IOMMU (as per PCIe 4.0r1.0 - 6.20.1 + * and 10.4.1.2 - Managing PASID TLP Prefix Usage). Some PCI devices will wait + * for all outstanding page requests to come back with a response before + * completing the PASID stop request. Others do not wait for page responses, and + * instead issue this Stop Marker that tells us when the PASID can be + * reallocated. + * + * It is safe to discard the Stop Marker because it is an optimization. + * a. Page requests, which are posted requests, have been flushed to the IOMMU + * when the stop request completes. + * b. The IOMMU driver flushes all fault queues on unbind() before freeing the + * PASID. + * + * So even though the Stop Marker might be issued by the device *after* the stop + * request completes, outstanding faults will have been dealt with by the time + * the PASID is freed. + * + * Return: 0 on success and <0 on error. + */ +int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) +{ + int ret; + struct iopf_group *group; + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + + struct device *dev = cookie; + struct dev_iommu *param = dev->iommu; + + lockdep_assert_held(¶m->lock); + + if (fault->type != IOMMU_FAULT_PAGE_REQ) + /* Not a recoverable page fault */ + return -EOPNOTSUPP; + + /* + * As long as we're holding param->lock, the queue can't be unlinked + * from the device and therefore cannot disappear. + */ + iopf_param = param->iopf_param; + if (!iopf_param) + return -ENODEV; + + if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { + iopf = kzalloc(sizeof(*iopf), GFP_KERNEL); + if (!iopf) + return -ENOMEM; + + iopf->fault = *fault; + + /* Non-last request of a group. Postpone until the last one */ + list_add(&iopf->list, &iopf_param->partial); + + return 0; + } + + group = kzalloc(sizeof(*group), GFP_KERNEL); + if (!group) { + /* + * The caller will send a response to the hardware. But we do + * need to clean up before leaving, otherwise partial faults + * will be stuck. + */ + ret = -ENOMEM; + goto cleanup_partial; + } + + group->dev = dev; + group->last_fault.fault = *fault; + INIT_LIST_HEAD(&group->faults); + list_add(&group->last_fault.list, &group->faults); + INIT_WORK(&group->work, iopf_handle_group); + + /* See if we have partial faults for this group */ + list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { + if (iopf->fault.prm.grpid == fault->prm.grpid) + /* Insert *before* the last fault */ + list_move(&iopf->list, &group->faults); + } + + queue_work(iopf_param->queue->wq, &group->work); + return 0; + +cleanup_partial: + list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { + if (iopf->fault.prm.grpid == fault->prm.grpid) { + list_del(&iopf->list); + kfree(iopf); + } + } + return ret; +} +EXPORT_SYMBOL_GPL(iommu_queue_iopf); + +/** + * iopf_queue_flush_dev - Ensure that all queued faults have been processed + * @dev: the endpoint whose faults need to be flushed. + * + * The IOMMU driver calls this before releasing a PASID, to ensure that all + * pending faults for this PASID have been handled, and won't hit the address + * space of the next process that uses this PASID. The driver must make sure + * that no new fault is added to the queue. In particular it must flush its + * low-level queue before calling this function. + * + * Return: 0 on success and <0 on error. 
+ */ +int iopf_queue_flush_dev(struct device *dev) +{ + int ret = 0; + struct iopf_device_param *iopf_param; + struct dev_iommu *param = dev->iommu; + + if (!param) + return -ENODEV; + + mutex_lock(¶m->lock); + iopf_param = param->iopf_param; + if (iopf_param) + flush_workqueue(iopf_param->queue->wq); + else + ret = -ENODEV; + mutex_unlock(¶m->lock); + + return ret; +} +EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); + +/** + * iopf_queue_discard_partial - Remove all pending partial fault + * @queue: the queue whose partial faults need to be discarded + * + * When the hardware queue overflows, last page faults in a group may have been + * lost and the IOMMU driver calls this to discard all partial faults. The + * driver shouldn't be adding new faults to this queue concurrently. + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_discard_partial(struct iopf_queue *queue) +{ + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + + if (!queue) + return -EINVAL; + + mutex_lock(&queue->lock); + list_for_each_entry(iopf_param, &queue->devices, queue_list) { + list_for_each_entry_safe(iopf, next, &iopf_param->partial, + list) { + list_del(&iopf->list); + kfree(iopf); + } + } + mutex_unlock(&queue->lock); + return 0; +} +EXPORT_SYMBOL_GPL(iopf_queue_discard_partial); + +/** + * iopf_queue_add_device - Add producer to the fault queue + * @queue: IOPF queue + * @dev: device to add + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) +{ + int ret = -EBUSY; + struct iopf_device_param *iopf_param; + struct dev_iommu *param = dev->iommu; + + if (!param) + return -ENODEV; + + iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL); + if (!iopf_param) + return -ENOMEM; + + INIT_LIST_HEAD(&iopf_param->partial); + iopf_param->queue = queue; + iopf_param->dev = dev; + + mutex_lock(&queue->lock); + mutex_lock(¶m->lock); + if (!param->iopf_param) { + list_add(&iopf_param->queue_list, &queue->devices); + param->iopf_param = iopf_param; + ret = 0; + } + mutex_unlock(¶m->lock); + mutex_unlock(&queue->lock); + + if (ret) + kfree(iopf_param); + + return ret; +} +EXPORT_SYMBOL_GPL(iopf_queue_add_device); + +/** + * iopf_queue_remove_device - Remove producer from fault queue + * @queue: IOPF queue + * @dev: device to remove + * + * Caller makes sure that no more faults are reported for this device. + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) +{ + int ret = 0; + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + struct dev_iommu *param = dev->iommu; + + if (!param || !queue) + return -EINVAL; + + mutex_lock(&queue->lock); + mutex_lock(¶m->lock); + iopf_param = param->iopf_param; + if (iopf_param && iopf_param->queue == queue) { + list_del(&iopf_param->queue_list); + param->iopf_param = NULL; + } else { + ret = -EINVAL; + } + mutex_unlock(¶m->lock); + mutex_unlock(&queue->lock); + if (ret) + return ret; + + /* Just in case some faults are still stuck */ + list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) + kfree(iopf); + + kfree(iopf_param); + + return 0; +} +EXPORT_SYMBOL_GPL(iopf_queue_remove_device); + +/** + * iopf_queue_alloc - Allocate and initialize a fault queue + * @name: a unique string identifying the queue (for workqueue) + * + * Return: the queue on success and NULL on error. 
+ */
+struct iopf_queue *iopf_queue_alloc(const char *name)
+{
+	struct iopf_queue *queue;
+
+	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	if (!queue)
+		return NULL;
+
+	/*
+	 * The WQ is unordered because the low-level handler enqueues faults by
+	 * group. PRI requests within a group have to be ordered, but once
+	 * that's dealt with, the high-level function can handle groups out of
+	 * order.
+	 */
+	queue->wq = alloc_workqueue("iopf_queue/%s", WQ_UNBOUND, 0, name);
+	if (!queue->wq) {
+		kfree(queue);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&queue->devices);
+	mutex_init(&queue->lock);
+
+	return queue;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_alloc);
+
+/**
+ * iopf_queue_free - Free IOPF queue
+ * @queue: queue to free
+ *
+ * Counterpart to iopf_queue_alloc(). The driver must not be queuing faults or
+ * adding/removing devices on this queue anymore.
+ */
+void iopf_queue_free(struct iopf_queue *queue)
+{
+	struct iopf_device_param *iopf_param, *next;
+
+	if (!queue)
+		return;
+
+	list_for_each_entry_safe(iopf_param, next, &queue->devices, queue_list)
+		iopf_queue_remove_device(queue, iopf_param->dev);
+
+	destroy_workqueue(queue->wq);
+	kfree(queue);
+}
+EXPORT_SYMBOL_GPL(iopf_queue_free);
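To show how the pieces added by this patch fit together, here is a hedged
sketch of the calls an IOMMU driver might make to set up IOPF handling for
one device. The single global queue, the function names and the teardown
order are assumptions of the sketch, not something the patch mandates:

#include <linux/iommu.h>

#include "iommu-sva-lib.h"

static struct iopf_queue *example_iopf_queue;	/* e.g. one queue per IOMMU */

static int example_enable_iopf(struct device *dev)
{
	int ret;

	if (!example_iopf_queue)
		example_iopf_queue = iopf_queue_alloc("example-iopf");
	if (!example_iopf_queue)
		return -ENOMEM;

	ret = iopf_queue_add_device(example_iopf_queue, dev);
	if (ret)
		return ret;

	/* Faults reported with iommu_report_device_fault() land in iommu_queue_iopf() */
	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
	if (ret)
		iopf_queue_remove_device(example_iopf_queue, dev);

	return ret;
}

static void example_disable_iopf(struct device *dev)
{
	/* Make sure no fault is still being handled before tearing down */
	iopf_queue_flush_dev(dev);
	iommu_unregister_device_fault_handler(dev);
	iopf_queue_remove_device(example_iopf_queue, dev);
}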
From patchwork Fri Jan 8 14:52:15 2021
From: Jean-Philippe Brucker To: joro@8bytes.org, will@kernel.org Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com, sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org, robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com, vdumpa@nvidia.com, zhangfei.gao@linaro.org, shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com, Jean-Philippe Brucker Subject: [PATCH v9 07/10] iommu/arm-smmu-v3: Maintain a SID->device structure Date: Fri, 8 Jan 2021 15:52:15 +0100 Message-Id: <20210108145217.2254447-8-jean-philippe@linaro.org> In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org> References: <20210108145217.2254447-1-jean-philippe@linaro.org> When handling faults from the event or PRI queue, we need to find the struct device associated with a SID. Add a rb_tree to keep track of SIDs.
Signed-off-by: Jean-Philippe Brucker Acked-by: Jonathan Cameron --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 13 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 161 ++++++++++++++++---- 2 files changed, 144 insertions(+), 30 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 96c2e9565e00..8ef6a1c48635 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -636,6 +636,15 @@ struct arm_smmu_device { /* IOMMU core code handle */ struct iommu_device iommu; + + struct rb_root streams; + struct mutex streams_mutex; +}; + +struct arm_smmu_stream { + u32 id; + struct arm_smmu_master *master; + struct rb_node node; }; /* SMMU private data for each master */ @@ -644,8 +653,8 @@ struct arm_smmu_master { struct device *dev; struct arm_smmu_domain *domain; struct list_head domain_head; - u32 *sids; - unsigned int num_sids; + struct arm_smmu_stream *streams; + unsigned int num_streams; bool ats_enabled; bool sva_enabled; struct list_head bonds; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 6a53b4edf054..2dbae2e6965d 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -912,8 +912,8 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain, spin_lock_irqsave(&smmu_domain->devices_lock, flags); list_for_each_entry(master, &smmu_domain->devices, domain_head) { - for (i = 0; i < master->num_sids; i++) { - cmd.cfgi.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.cfgi.sid = master->streams[i].id; arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd); } } @@ -1355,6 +1355,32 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } +__maybe_unused +static struct arm_smmu_master * +arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) +{ + struct rb_node *node; + struct arm_smmu_stream *stream; + struct arm_smmu_master *master = NULL; + + mutex_lock(&smmu->streams_mutex); + node = smmu->streams.rb_node; + while (node) { + stream = rb_entry(node, struct arm_smmu_stream, node); + if (stream->id < sid) { + node = node->rb_right; + } else if (stream->id > sid) { + node = node->rb_left; + } else { + master = stream->master; + break; + } + } + mutex_unlock(&smmu->streams_mutex); + + return master; +} + /* IRQ and event handlers */ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) { @@ -1588,8 +1614,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master) arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd); - for (i = 0; i < master->num_sids; i++) { - cmd.atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.atc.sid = master->streams[i].id; arm_smmu_cmdq_issue_cmd(master->smmu, &cmd); } @@ -1632,8 +1658,8 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid, if (!master->ats_enabled) continue; - for (i = 0; i < master->num_sids; i++) { - cmd.atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.atc.sid = master->streams[i].id; arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd); } } @@ -2040,13 +2066,13 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master) int i, j; struct arm_smmu_device *smmu = master->smmu; - for (i = 0; i < master->num_sids; ++i) { - u32 sid = master->sids[i]; + for (i = 0; i < master->num_streams; ++i) { + u32 sid = master->streams[i].id; __le64 *step = 
arm_smmu_get_step_for_sid(smmu, sid); /* Bridged PCI devices may end up with duplicated IDs */ for (j = 0; j < i; j++) - if (master->sids[j] == sid) + if (master->streams[j].id == sid) break; if (j < i) continue; @@ -2319,11 +2345,101 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } +static int arm_smmu_insert_master(struct arm_smmu_device *smmu, + struct arm_smmu_master *master) +{ + int i; + int ret = 0; + struct arm_smmu_stream *new_stream, *cur_stream; + struct rb_node **new_node, *parent_node = NULL; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); + + master->streams = kcalloc(fwspec->num_ids, + sizeof(struct arm_smmu_stream), GFP_KERNEL); + if (!master->streams) + return -ENOMEM; + master->num_streams = fwspec->num_ids; + + mutex_lock(&smmu->streams_mutex); + for (i = 0; i < fwspec->num_ids && !ret; i++) { + u32 sid = fwspec->ids[i]; + + new_stream = &master->streams[i]; + new_stream->id = sid; + new_stream->master = master; + + /* + * Check the SIDs are in range of the SMMU and our stream table + */ + if (!arm_smmu_sid_in_range(smmu, sid)) { + ret = -ERANGE; + break; + } + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { + ret = arm_smmu_init_l2_strtab(smmu, sid); + if (ret) + break; + } + + /* Insert into SID tree */ + new_node = &(smmu->streams.rb_node); + while (*new_node) { + cur_stream = rb_entry(*new_node, struct arm_smmu_stream, + node); + parent_node = *new_node; + if (cur_stream->id > new_stream->id) { + new_node = &((*new_node)->rb_left); + } else if (cur_stream->id < new_stream->id) { + new_node = &((*new_node)->rb_right); + } else { + dev_warn(master->dev, + "stream %u already in tree\n", + cur_stream->id); + ret = -EINVAL; + break; + } + } + + if (!ret) { + rb_link_node(&new_stream->node, parent_node, new_node); + rb_insert_color(&new_stream->node, &smmu->streams); + } + } + + if (ret) { + for (; i > 0; i--) + rb_erase(&master->streams[i].node, &smmu->streams); + kfree(master->streams); + } + mutex_unlock(&smmu->streams_mutex); + + return ret; +} + +static void arm_smmu_remove_master(struct arm_smmu_master *master) +{ + int i; + struct arm_smmu_device *smmu = master->smmu; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); + + if (!smmu || !master->streams) + return; + + mutex_lock(&smmu->streams_mutex); + for (i = 0; i < fwspec->num_ids; i++) + rb_erase(&master->streams[i].node, &smmu->streams); + mutex_unlock(&smmu->streams_mutex); + + kfree(master->streams); +} + static struct iommu_ops arm_smmu_ops; static struct iommu_device *arm_smmu_probe_device(struct device *dev) { - int i, ret; + int ret; struct arm_smmu_device *smmu; struct arm_smmu_master *master; struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); @@ -2344,27 +2460,12 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev) master->dev = dev; master->smmu = smmu; - master->sids = fwspec->ids; - master->num_sids = fwspec->num_ids; INIT_LIST_HEAD(&master->bonds); dev_iommu_priv_set(dev, master); - /* Check the SIDs are in range of the SMMU and our stream table */ - for (i = 0; i < master->num_sids; i++) { - u32 sid = master->sids[i]; - - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; - goto err_free_master; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - goto err_free_master; - } - } + ret = arm_smmu_insert_master(smmu, master); + if (ret) + goto 
err_free_master; device_property_read_u32(dev, "pasid-num-bits", &master->ssid_bits); master->ssid_bits = min(smmu->ssid_bits, master->ssid_bits); @@ -2403,6 +2504,7 @@ static void arm_smmu_release_device(struct device *dev) WARN_ON(arm_smmu_master_sva_enabled(master)); arm_smmu_detach_dev(master); arm_smmu_disable_pasid(master); + arm_smmu_remove_master(master); kfree(master); iommu_fwspec_free(dev); } @@ -2825,6 +2927,9 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu) { int ret; + mutex_init(&smmu->streams_mutex); + smmu->streams = RB_ROOT; + ret = arm_smmu_init_queues(smmu); if (ret) return ret;
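The stream lookup added by this patch is the standard kernel rb-tree idiom, keyed by the 32-bit stream ID. For readers unfamiliar with the API, the same pattern is shown below in isolation; the type and function names (id_node, id_tree_*) are made up for the example:

#include <linux/errno.h>
#include <linux/rbtree.h>
#include <linux/types.h>

struct id_node {
	u32 id;
	struct rb_node node;
};

/* Look up an entry by ID in O(log n) */
static struct id_node *id_tree_find(struct rb_root *root, u32 id)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct id_node *e = rb_entry(n, struct id_node, node);

		if (id < e->id)
			n = n->rb_left;
		else if (id > e->id)
			n = n->rb_right;
		else
			return e;
	}
	return NULL;
}

/* Insert a new entry, refusing duplicate IDs */
static int id_tree_insert(struct rb_root *root, struct id_node *new)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;

	while (*link) {
		struct id_node *e = rb_entry(*link, struct id_node, node);

		parent = *link;
		if (new->id < e->id)
			link = &(*link)->rb_left;
		else if (new->id > e->id)
			link = &(*link)->rb_right;
		else
			return -EEXIST;
	}
	rb_link_node(&new->node, parent, link);
	rb_insert_color(&new->node, root);
	return 0;
}

Patch 10 relies on this lookup from the event queue handler, where only the SID identifies the faulting master.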
From patchwork Fri Jan 8 14:52:16 2021
From: Jean-Philippe Brucker To: joro@8bytes.org, will@kernel.org Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com, sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org, robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com, vdumpa@nvidia.com, zhangfei.gao@linaro.org, shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com, Jean-Philippe Brucker , Rob Herring Subject: [PATCH v9
08/10] dt-bindings: document stall property for IOMMU masters Date: Fri, 8 Jan 2021 15:52:16 +0100 Message-Id: <20210108145217.2254447-9-jean-philippe@linaro.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org> References: <20210108145217.2254447-1-jean-philippe@linaro.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org On ARM systems, some platform devices behind an IOMMU may support stall, which is the ability to recover from page faults. Let the firmware tell us when a device supports stall. Reviewed-by: Rob Herring Signed-off-by: Jean-Philippe Brucker --- .../devicetree/bindings/iommu/iommu.txt | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) -- 2.29.2 diff --git a/Documentation/devicetree/bindings/iommu/iommu.txt b/Documentation/devicetree/bindings/iommu/iommu.txt index 3c36334e4f94..26ba9e530f13 100644 --- a/Documentation/devicetree/bindings/iommu/iommu.txt +++ b/Documentation/devicetree/bindings/iommu/iommu.txt @@ -92,6 +92,24 @@ Optional properties: tagging DMA transactions with an address space identifier. By default, this is 0, which means that the device only has one address space. +- dma-can-stall: When present, the master can wait for a transaction to + complete for an indefinite amount of time. Upon translation fault some + IOMMUs, instead of aborting the translation immediately, may first + notify the driver and keep the transaction in flight. This allows the OS + to inspect the fault and, for example, make physical pages resident + before updating the mappings and completing the transaction. Such IOMMU + accepts a limited number of simultaneous stalled transactions before + having to either put back-pressure on the master, or abort new faulting + transactions. + + Firmware has to opt-in stalling, because most buses and masters don't + support it. In particular it isn't compatible with PCI, where + transactions have to complete before a time limit. More generally it + won't work in systems and masters that haven't been designed for + stalling. For example the OS, in order to handle a stalled transaction, + may attempt to retrieve pages from secondary storage in a stalled + domain, leading to a deadlock. 
+ Notes: ======
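The binding text above does not add an example node; for illustration only, a stall-capable master behind an SMMUv3 instance might be described as follows (the accelerator node, its compatible string and the stream ID are invented for this sketch):

	smmu: iommu@2b400000 {
		compatible = "arm,smmu-v3";
		reg = <0x2b400000 0x20000>;
		#iommu-cells = <1>;
		/* ... interrupts, etc. ... */
	};

	accel@10000000 {
		compatible = "vendor,stall-capable-accel";
		reg = <0x10000000 0x1000>;
		iommus = <&smmu 0x42>;
		/* 5 PASID bits: up to 32 address spaces */
		pasid-num-bits = <0x5>;
		/* transactions may stall on page faults instead of aborting */
		dma-can-stall;
	};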
From patchwork Fri Jan 8 14:52:17 2021
From: Jean-Philippe Brucker To: joro@8bytes.org, will@kernel.org Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com, sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org, robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com, vdumpa@nvidia.com, zhangfei.gao@linaro.org, shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com, Jean-Philippe Brucker Subject: [PATCH v9 09/10]
ACPI/IORT: Enable stall support for platform devices Date: Fri, 8 Jan 2021 15:52:17 +0100 Message-Id: <20210108145217.2254447-10-jean-philippe@linaro.org> In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org> References: <20210108145217.2254447-1-jean-philippe@linaro.org> Copy the "Stall supported" bit, that tells whether a named component supports stall, into the dma-can-stall device property. Signed-off-by: Jean-Philippe Brucker --- v9: dropped fwspec member in favor of device properties --- drivers/acpi/arm64/iort.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) -- 2.29.2 diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index c9a8bbb74b09..42820d7eb869 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -968,13 +968,15 @@ static int iort_pci_iommu_init(struct pci_dev *pdev, u16 alias, void *data) static void iort_named_component_init(struct device *dev, struct acpi_iort_node *node) { - struct property_entry props[2] = {}; + struct property_entry props[3] = {}; struct acpi_iort_named_component *nc; nc = (struct acpi_iort_named_component *)node->node_data; props[0] = PROPERTY_ENTRY_U32("pasid-num-bits", FIELD_GET(ACPI_IORT_NC_PASID_BITS, nc->node_flags)); + if (nc->node_flags & ACPI_IORT_NC_STALL_SUPPORTED) + props[1] = PROPERTY_ENTRY_BOOL("dma-can-stall"); if (device_add_properties(dev, props)) dev_warn(dev, "Could not add device properties\n");
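Because both the DT path (a dma-can-stall property in the node) and the IORT path above end up as a generic device property, a consumer can stay firmware-agnostic. A minimal sketch, assuming the caller already holds the struct device of the master:

#include <linux/property.h>

/* True if firmware (DT or IORT) declared the master stall-capable */
static bool my_device_can_stall(struct device *dev)
{
	return device_property_read_bool(dev, "dma-can-stall");
}

This is how the SMMUv3 driver decides to set master->stall_enabled in the next patch.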
From patchwork Fri Jan 8 14:52:18 2021
From: Jean-Philippe Brucker To: joro@8bytes.org, will@kernel.org Cc: lorenzo.pieralisi@arm.com, robh+dt@kernel.org, guohanjun@huawei.com, sudeep.holla@arm.com, rjw@rjwysocki.net, lenb@kernel.org, robin.murphy@arm.com, Jonathan.Cameron@huawei.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-accelerators@lists.ozlabs.org, baolu.lu@linux.intel.com, vdumpa@nvidia.com, zhangfei.gao@linaro.org, shameerali.kolothum.thodi@huawei.com, vivek.gautam@arm.com, Jean-Philippe Brucker Subject: [PATCH v9 10/10] iommu/arm-smmu-v3: Add stall support for platform devices Date: Fri, 8 Jan 2021 15:52:18 +0100 Message-Id: <20210108145217.2254447-11-jean-philippe@linaro.org> In-Reply-To: <20210108145217.2254447-1-jean-philippe@linaro.org> References: <20210108145217.2254447-1-jean-philippe@linaro.org> The SMMU provides a Stall model for handling page faults in platform devices. It is similar to PCIe PRI, but doesn't require devices to have their own translation cache. Instead, faulting transactions are parked and the OS is given a chance to fix the page tables and retry the transaction. Enable stall for devices that support it (opt-in by firmware). When an event corresponds to a translation error, call the IOMMU fault handler. If the fault is recoverable, it will call us back to terminate or continue the stall. To use stall device drivers need to enable IOMMU_DEV_FEAT_IOPF, which initializes the fault queue for the device.
Signed-off-by: Jean-Philippe Brucker --- v9: Add IOMMU_DEV_FEAT_IOPF --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 61 ++++++ .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 70 ++++++- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 192 ++++++++++++++++-- 3 files changed, 306 insertions(+), 17 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 8ef6a1c48635..cb129870ef55 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -354,6 +354,13 @@ #define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) #define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) +#define CMDQ_RESUME_0_RESP_TERM 0UL +#define CMDQ_RESUME_0_RESP_RETRY 1UL +#define CMDQ_RESUME_0_RESP_ABORT 2UL +#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) + #define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) #define CMDQ_SYNC_0_CS_NONE 0 #define CMDQ_SYNC_0_CS_IRQ 1 @@ -370,6 +377,25 @@ #define EVTQ_0_ID GENMASK_ULL(7, 0) +#define EVT_ID_TRANSLATION_FAULT 0x10 +#define EVT_ID_ADDR_SIZE_FAULT 0x11 +#define EVT_ID_ACCESS_FAULT 0x12 +#define EVT_ID_PERMISSION_FAULT 0x13 + +#define EVTQ_0_SSV (1UL << 11) +#define EVTQ_0_SSID GENMASK_ULL(31, 12) +#define EVTQ_0_SID GENMASK_ULL(63, 32) +#define EVTQ_1_STAG GENMASK_ULL(15, 0) +#define EVTQ_1_STALL (1UL << 31) +#define EVTQ_1_PRIV (1UL << 33) +#define EVTQ_1_EXEC (1UL << 34) +#define EVTQ_1_READ (1UL << 35) +#define EVTQ_1_S2 (1UL << 39) +#define EVTQ_1_CLASS GENMASK_ULL(41, 40) +#define EVTQ_1_TT_READ (1UL << 44) +#define EVTQ_2_ADDR GENMASK_ULL(63, 0) +#define EVTQ_3_IPA GENMASK_ULL(51, 12) + /* PRI queue */ #define PRIQ_ENT_SZ_SHIFT 4 #define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) @@ -462,6 +488,13 @@ struct arm_smmu_cmdq_ent { enum pri_resp resp; } pri; + #define CMDQ_OP_RESUME 0x44 + struct { + u32 sid; + u16 stag; + u8 resp; + } resume; + #define CMDQ_OP_CMD_SYNC 0x46 struct { u64 msiaddr; @@ -520,6 +553,7 @@ struct arm_smmu_cmdq_batch { struct arm_smmu_evtq { struct arm_smmu_queue q; + struct iopf_queue *iopf; u32 max_stalls; }; @@ -656,7 +690,9 @@ struct arm_smmu_master { struct arm_smmu_stream *streams; unsigned int num_streams; bool ats_enabled; + bool stall_enabled; bool sva_enabled; + bool iopf_enabled; struct list_head bonds; unsigned int ssid_bits; }; @@ -675,6 +711,7 @@ struct arm_smmu_domain { struct io_pgtable_ops *pgtbl_ops; bool non_strict; + bool stall_enabled; atomic_t nr_ats_masters; enum arm_smmu_domain_stage stage; @@ -713,6 +750,10 @@ bool arm_smmu_master_sva_supported(struct arm_smmu_master *master); bool arm_smmu_master_sva_enabled(struct arm_smmu_master *master); int arm_smmu_master_enable_sva(struct arm_smmu_master *master); int arm_smmu_master_disable_sva(struct arm_smmu_master *master); +bool arm_smmu_master_iopf_supported(struct arm_smmu_master *master); +bool arm_smmu_master_iopf_enabled(struct arm_smmu_master *master); +int arm_smmu_master_enable_iopf(struct arm_smmu_master *master); +int arm_smmu_master_disable_iopf(struct arm_smmu_master *master); struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata); void arm_smmu_sva_unbind(struct iommu_sva *handle); @@ -744,6 +785,26 @@ static inline int arm_smmu_master_disable_sva(struct arm_smmu_master *master) return -ENODEV; } +static inline bool arm_smmu_master_iopf_supported(struct arm_smmu_master *master) +{ + return false; +} + +static inline bool arm_smmu_master_iopf_enabled(struct 
arm_smmu_master *master) +{ + return false; +} + +static inline int arm_smmu_master_enable_iopf(struct arm_smmu_master *master) +{ + return -ENODEV; +} + +static inline int arm_smmu_master_disable_iopf(struct arm_smmu_master *master) +{ + return -ENODEV; +} + static inline struct iommu_sva * arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata) { diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index e13b092e6004..17acfee4f484 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -431,9 +431,9 @@ bool arm_smmu_sva_supported(struct arm_smmu_device *smmu) return true; } -static bool arm_smmu_iopf_supported(struct arm_smmu_master *master) +bool arm_smmu_master_iopf_supported(struct arm_smmu_master *master) { - return false; + return master->stall_enabled; } bool arm_smmu_master_sva_supported(struct arm_smmu_master *master) @@ -441,8 +441,18 @@ bool arm_smmu_master_sva_supported(struct arm_smmu_master *master) if (!(master->smmu->features & ARM_SMMU_FEAT_SVA)) return false; - /* SSID and IOPF support are mandatory for the moment */ - return master->ssid_bits && arm_smmu_iopf_supported(master); + /* SSID support is mandatory for the moment */ + return master->ssid_bits; +} + +bool arm_smmu_master_iopf_enabled(struct arm_smmu_master *master) +{ + bool enabled; + + mutex_lock(&sva_lock); + enabled = master->iopf_enabled; + mutex_unlock(&sva_lock); + return enabled; } bool arm_smmu_master_sva_enabled(struct arm_smmu_master *master) @@ -455,15 +465,67 @@ bool arm_smmu_master_sva_enabled(struct arm_smmu_master *master) return enabled; } +int arm_smmu_master_enable_iopf(struct arm_smmu_master *master) +{ + int ret; + struct device *dev = master->dev; + + mutex_lock(&sva_lock); + if (master->stall_enabled) { + ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev); + if (ret) + goto err_unlock; + } + + ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev); + if (ret) + goto err_remove_device; + master->iopf_enabled = true; + mutex_unlock(&sva_lock); + return 0; + +err_remove_device: + iopf_queue_remove_device(master->smmu->evtq.iopf, dev); +err_unlock: + mutex_unlock(&sva_lock); + return ret; +} + int arm_smmu_master_enable_sva(struct arm_smmu_master *master) { mutex_lock(&sva_lock); + /* + * Drivers for devices supporting PRI or stall should enable IOPF first. + * Others have device-specific fault handlers and don't need IOPF, so + * this sanity check is a bit basic. 
+ */ + if (arm_smmu_master_iopf_supported(master) && !master->iopf_enabled) { + mutex_unlock(&sva_lock); + return -EINVAL; + } master->sva_enabled = true; mutex_unlock(&sva_lock); return 0; } +int arm_smmu_master_disable_iopf(struct arm_smmu_master *master) +{ + struct device *dev = master->dev; + + mutex_lock(&sva_lock); + if (master->sva_enabled) { + mutex_unlock(&sva_lock); + return -EBUSY; + } + + iommu_unregister_device_fault_handler(dev); + iopf_queue_remove_device(master->smmu->evtq.iopf, dev); + master->iopf_enabled = false; + mutex_unlock(&sva_lock); + return 0; +} + int arm_smmu_master_disable_sva(struct arm_smmu_master *master) { mutex_lock(&sva_lock); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 2dbae2e6965d..1fea11d65cd3 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -32,6 +32,7 @@ #include #include "arm-smmu-v3.h" +#include "../../iommu-sva-lib.h" static bool disable_bypass = true; module_param(disable_bypass, bool, 0444); @@ -319,6 +320,11 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) } cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp); break; + case CMDQ_OP_RESUME: + cmd[0] |= FIELD_PREP(CMDQ_RESUME_0_SID, ent->resume.sid); + cmd[0] |= FIELD_PREP(CMDQ_RESUME_0_RESP, ent->resume.resp); + cmd[1] |= FIELD_PREP(CMDQ_RESUME_1_STAG, ent->resume.stag); + break; case CMDQ_OP_CMD_SYNC: if (ent->sync.msiaddr) { cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ); @@ -882,6 +888,44 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu, return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true); } +static int arm_smmu_page_response(struct device *dev, + struct iommu_fault_event *unused, + struct iommu_page_response *resp) +{ + struct arm_smmu_cmdq_ent cmd = {0}; + struct arm_smmu_master *master = dev_iommu_priv_get(dev); + int sid = master->streams[0].id; + + if (master->stall_enabled) { + cmd.opcode = CMDQ_OP_RESUME; + cmd.resume.sid = sid; + cmd.resume.stag = resp->grpid; + switch (resp->code) { + case IOMMU_PAGE_RESP_INVALID: + case IOMMU_PAGE_RESP_FAILURE: + cmd.resume.resp = CMDQ_RESUME_0_RESP_ABORT; + break; + case IOMMU_PAGE_RESP_SUCCESS: + cmd.resume.resp = CMDQ_RESUME_0_RESP_RETRY; + break; + default: + return -EINVAL; + } + } else { + return -ENODEV; + } + + arm_smmu_cmdq_issue_cmd(master->smmu, &cmd); + /* + * Don't send a SYNC, it doesn't do anything for RESUME or PRI_RESP. + * RESUME consumption guarantees that the stalled transaction will be + * terminated... at some point in the future. PRI_RESP is fire and + * forget. 
+ */ + + return 0; +} + /* Context descriptor manipulation functions */ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) { @@ -991,7 +1035,6 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, u64 val; bool cd_live; __le64 *cdptr; - struct arm_smmu_device *smmu = smmu_domain->smmu; if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax))) return -E2BIG; @@ -1036,8 +1079,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) | CTXDESC_CD_0_V; - /* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */ - if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE) + if (smmu_domain->stall_enabled) val |= CTXDESC_CD_0_S; } @@ -1278,7 +1320,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1)); if (smmu->features & ARM_SMMU_FEAT_STALLS && - !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) + !master->stall_enabled) dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD); val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) | @@ -1355,7 +1397,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } -__maybe_unused static struct arm_smmu_master * arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) { @@ -1382,9 +1423,96 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) } /* IRQ and event handlers */ +static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) +{ + int ret; + u32 perm = 0; + struct arm_smmu_master *master; + bool ssid_valid = evt[0] & EVTQ_0_SSV; + u8 type = FIELD_GET(EVTQ_0_ID, evt[0]); + u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]); + struct iommu_fault_event fault_evt = { }; + struct iommu_fault *flt = &fault_evt.fault; + + /* Stage-2 is always pinned at the moment */ + if (evt[1] & EVTQ_1_S2) + return -EFAULT; + + master = arm_smmu_find_master(smmu, sid); + if (!master) + return -EINVAL; + + if (evt[1] & EVTQ_1_READ) + perm |= IOMMU_FAULT_PERM_READ; + else + perm |= IOMMU_FAULT_PERM_WRITE; + + if (evt[1] & EVTQ_1_EXEC) + perm |= IOMMU_FAULT_PERM_EXEC; + + if (evt[1] & EVTQ_1_PRIV) + perm |= IOMMU_FAULT_PERM_PRIV; + + if (evt[1] & EVTQ_1_STALL) { + flt->type = IOMMU_FAULT_PAGE_REQ; + flt->prm = (struct iommu_fault_page_request) { + .flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE, + .grpid = FIELD_GET(EVTQ_1_STAG, evt[1]), + .perm = perm, + .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), + }; + + if (ssid_valid) { + flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + flt->prm.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]); + } + } else { + flt->type = IOMMU_FAULT_DMA_UNRECOV; + flt->event = (struct iommu_fault_unrecoverable) { + .flags = IOMMU_FAULT_UNRECOV_ADDR_VALID | + IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID, + .perm = perm, + .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), + .fetch_addr = FIELD_GET(EVTQ_3_IPA, evt[3]), + }; + + if (ssid_valid) { + flt->event.flags |= IOMMU_FAULT_UNRECOV_PASID_VALID; + flt->event.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]); + } + + switch (type) { + case EVT_ID_TRANSLATION_FAULT: + case EVT_ID_ADDR_SIZE_FAULT: + case EVT_ID_ACCESS_FAULT: + flt->event.reason = IOMMU_FAULT_REASON_PTE_FETCH; + break; + case EVT_ID_PERMISSION_FAULT: + flt->event.reason = IOMMU_FAULT_REASON_PERMISSION; + break; + default: + /* TODO: report other unrecoverable faults. 
*/ + return -EFAULT; + } + } + + ret = iommu_report_device_fault(master->dev, &fault_evt); + if (ret && flt->type == IOMMU_FAULT_PAGE_REQ) { + /* Nobody cared, abort the access */ + struct iommu_page_response resp = { + .pasid = flt->prm.pasid, + .grpid = flt->prm.grpid, + .code = IOMMU_PAGE_RESP_FAILURE, + }; + arm_smmu_page_response(master->dev, NULL, &resp); + } + + return ret; +} + static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) { - int i; + int i, ret; struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = &smmu->evtq.q; struct arm_smmu_ll_queue *llq = &q->llq; @@ -1394,11 +1522,14 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) while (!queue_remove_raw(q, evt)) { u8 id = FIELD_GET(EVTQ_0_ID, evt[0]); - dev_info(smmu->dev, "event 0x%02x received:\n", id); - for (i = 0; i < ARRAY_SIZE(evt); ++i) - dev_info(smmu->dev, "\t0x%016llx\n", - (unsigned long long)evt[i]); - + ret = arm_smmu_handle_evt(smmu, evt); + if (ret) { + dev_info(smmu->dev, "event 0x%02x received:\n", + id); + for (i = 0; i < ARRAY_SIZE(evt); ++i) + dev_info(smmu->dev, "\t0x%016llx\n", + (unsigned long long)evt[i]); + } } /* @@ -1903,6 +2034,8 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, cfg->s1cdmax = master->ssid_bits; + smmu_domain->stall_enabled = master->stall_enabled; + ret = arm_smmu_alloc_cd_tables(smmu_domain); if (ret) goto out_free_asid; @@ -2250,6 +2383,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) smmu_domain->s1_cfg.s1cdmax, master->ssid_bits); ret = -EINVAL; goto out_unlock; + } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 && + smmu_domain->stall_enabled != master->stall_enabled) { + dev_err(dev, "cannot attach to stall-%s domain\n", + smmu_domain->stall_enabled ? 
"enabled" : "disabled"); + ret = -EINVAL; + goto out_unlock; } master->domain = smmu_domain; @@ -2484,6 +2623,11 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev) master->ssid_bits = min_t(u8, master->ssid_bits, CTXDESC_LINEAR_CDMAX); + if ((smmu->features & ARM_SMMU_FEAT_STALLS && + device_property_read_bool(dev, "dma-can-stall")) || + smmu->features & ARM_SMMU_FEAT_STALL_FORCE) + master->stall_enabled = true; + return &smmu->iommu; err_free_master: @@ -2502,6 +2646,7 @@ static void arm_smmu_release_device(struct device *dev) master = dev_iommu_priv_get(dev); WARN_ON(arm_smmu_master_sva_enabled(master)); + iopf_queue_remove_device(master->smmu->evtq.iopf, dev); arm_smmu_detach_dev(master); arm_smmu_disable_pasid(master); arm_smmu_remove_master(master); @@ -2629,6 +2774,8 @@ static bool arm_smmu_dev_has_feature(struct device *dev, return false; switch (feat) { + case IOMMU_DEV_FEAT_IOPF: + return arm_smmu_master_iopf_supported(master); case IOMMU_DEV_FEAT_SVA: return arm_smmu_master_sva_supported(master); default: @@ -2645,6 +2792,8 @@ static bool arm_smmu_dev_feature_enabled(struct device *dev, return false; switch (feat) { + case IOMMU_DEV_FEAT_IOPF: + return arm_smmu_master_iopf_enabled(master); case IOMMU_DEV_FEAT_SVA: return arm_smmu_master_sva_enabled(master); default: @@ -2655,6 +2804,8 @@ static bool arm_smmu_dev_feature_enabled(struct device *dev, static int arm_smmu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat) { + struct arm_smmu_master *master = dev_iommu_priv_get(dev); + if (!arm_smmu_dev_has_feature(dev, feat)) return -ENODEV; @@ -2662,8 +2813,10 @@ static int arm_smmu_dev_enable_feature(struct device *dev, return -EBUSY; switch (feat) { + case IOMMU_DEV_FEAT_IOPF: + return arm_smmu_master_enable_iopf(master); case IOMMU_DEV_FEAT_SVA: - return arm_smmu_master_enable_sva(dev_iommu_priv_get(dev)); + return arm_smmu_master_enable_sva(master); default: return -EINVAL; } @@ -2672,12 +2825,16 @@ static int arm_smmu_dev_enable_feature(struct device *dev, static int arm_smmu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat) { + struct arm_smmu_master *master = dev_iommu_priv_get(dev); + if (!arm_smmu_dev_feature_enabled(dev, feat)) return -EINVAL; switch (feat) { + case IOMMU_DEV_FEAT_IOPF: + return arm_smmu_master_disable_iopf(master); case IOMMU_DEV_FEAT_SVA: - return arm_smmu_master_disable_sva(dev_iommu_priv_get(dev)); + return arm_smmu_master_disable_sva(master); default: return -EINVAL; } @@ -2708,6 +2865,7 @@ static struct iommu_ops arm_smmu_ops = { .sva_bind = arm_smmu_sva_bind, .sva_unbind = arm_smmu_sva_unbind, .sva_get_pasid = arm_smmu_sva_get_pasid, + .page_response = arm_smmu_page_response, .pgsize_bitmap = -1UL, /* Restricted during device attach */ }; @@ -2785,6 +2943,7 @@ static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu) static int arm_smmu_init_queues(struct arm_smmu_device *smmu) { int ret; + bool sva = arm_smmu_sva_supported(smmu); /* cmdq */ ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD, @@ -2804,6 +2963,12 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu) if (ret) return ret; + if (sva && smmu->features & ARM_SMMU_FEAT_STALLS) { + smmu->evtq.iopf = iopf_queue_alloc(dev_name(smmu->dev)); + if (!smmu->evtq.iopf) + return -ENOMEM; + } + /* priq */ if (!(smmu->features & ARM_SMMU_FEAT_PRI)) return 0; @@ -3718,6 +3883,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev) iommu_device_unregister(&smmu->iommu); 
iommu_device_sysfs_remove(&smmu->iommu); arm_smmu_device_disable(smmu); + iopf_queue_free(smmu->evtq.iopf); return 0; }
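Taking the series as a whole, the commit message of patch 10 notes that a device driver must enable IOMMU_DEV_FEAT_IOPF before it can use SVA on a stall-capable device. A consumer-side sketch of that ordering is shown here; the function and drvdata names are hypothetical and error handling is kept minimal:

#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/sched.h>

/* Bind the calling process' address space to a stall-capable device */
static struct iommu_sva *my_driver_bind_mm(struct device *dev, void *my_drvdata)
{
	struct iommu_sva *handle;
	int ret;

	/* IOPF first: sets up the fault queue for this device */
	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	if (ret)
		return ERR_PTR(ret);

	/* SVA second: on stall-capable masters the SMMUv3 driver refuses SVA without IOPF */
	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret) {
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
		return ERR_PTR(ret);
	}

	handle = iommu_sva_bind_device(dev, current->mm, my_drvdata);
	if (IS_ERR(handle)) {
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	}
	return handle;
}

Teardown goes the other way: iommu_sva_unbind_device(), disable SVA, then disable IOPF; arm_smmu_master_disable_iopf() above returns -EBUSY while SVA is still enabled.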