From patchwork Tue Apr 14 17:02:31 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202088
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org,
 devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-pci@vger.kernel.org, linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 03/25] iommu: Add a page fault handler
Date: Tue, 14 Apr 2020 19:02:31 +0200
Message-Id: <20200414170252.714402-4-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

Some systems allow devices to handle I/O Page Faults in the core mm, for
example systems implementing the PCI PRI extension or the Arm SMMU stall
model. Infrastructure for reporting these recoverable page faults was
recently added to the IOMMU core. Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them
to IOPF-capable devices. Drivers can choose between a single global
workqueue, one per IOMMU device, one per low-level fault queue, one per
domain, etc.

When it receives a fault event, typically in an IRQ handler, the IOMMU
driver reports the fault using iommu_report_device_fault(), which calls
the registered handler. The page fault handler then calls the mm fault
handler, and reports either success or failure with
iommu_page_response(). If the handler succeeds, the IOMMU retries the
access.

The iopf_param pointer could be embedded into iommu_fault_param. But
putting iopf_param into the dev_iommu structure allows us not to care
about ordering between calls to iopf_queue_add_device() and
iommu_register_device_fault_handler().
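As a usage sketch (illustration only, not part of this patch; the
"my_smmu" names are made up), an IOMMU driver is expected to wire the
queue up roughly as follows:

	/* Hypothetical driver state, for illustration */
	struct my_smmu_device {
		struct device *dev;
		struct iopf_queue *iopf_queue;
	};

	static int my_smmu_flush_evtq(void *cookie, struct device *dev,
				      int pasid)
	{
		/* Drain the low-level fault queue for @dev/@pasid here */
		return 0;
	}

	static int my_smmu_init_iopf(struct my_smmu_device *smmu)
	{
		/* One workqueue per IOMMU device, in this example */
		smmu->iopf_queue = iopf_queue_alloc(dev_name(smmu->dev),
						    my_smmu_flush_evtq, smmu);
		return smmu->iopf_queue ? 0 : -ENOMEM;
	}

	static int my_smmu_enable_iopf(struct my_smmu_device *smmu,
				       struct device *dev)
	{
		int ret;

		ret = iopf_queue_add_device(smmu->iopf_queue, dev);
		if (ret)
			return ret;

		/* iommu_queue_iopf() is the handler added by this patch */
		ret = iommu_register_device_fault_handler(dev,
							  iommu_queue_iopf,
							  dev);
		if (ret)
			iopf_queue_remove_device(smmu->iopf_queue, dev);
		return ret;
	}

The IRQ handler then forwards each fault with
iommu_report_device_fault(), which invokes iommu_queue_iopf() and queues
the work described below.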
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v4->v5: Fix 'busy' refcount
---
 drivers/iommu/Kconfig      |   4 +
 drivers/iommu/Makefile     |   1 +
 include/linux/iommu.h      |  60 +++++
 drivers/iommu/io-pgfault.c | 452 +++++++++++++++++++++++++++++++++++++
 4 files changed, 517 insertions(+)
 create mode 100644 drivers/iommu/io-pgfault.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index e81842f59b037..bf620bf48da03 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -109,6 +109,10 @@ config IOMMU_SVA
 	select IOMMU_API
 	select MMU_NOTIFIER
 
+config IOMMU_PAGE_FAULT
+	bool
+	select IOMMU_API
+
 config FSL_PAMU
 	bool "Freescale IOMMU support"
 	depends on PCI
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 40c800dd4e3ef..bf5cb4ee84093 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_IOMMU_DEBUGFS) += iommu-debugfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
+obj-$(CONFIG_IOMMU_PAGE_FAULT) += io-pgfault.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 167e468dd3510..5a3d092c2568a 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -343,12 +343,21 @@ struct iommu_fault_param {
 	struct mutex lock;
 };
 
+/**
+ * iopf_queue_flush_t - Flush low-level page fault queue
+ *
+ * Report all faults currently pending in the low-level page fault queue
+ */
+struct iopf_queue;
+typedef int (*iopf_queue_flush_t)(void *cookie, struct device *dev, int pasid);
+
 /**
  * struct dev_iommu - Collection of per-device IOMMU data
  *
  * @fault_param: IOMMU detected device fault reporting data
  * @sva_param: IOMMU parameter for SVA
  * @sva_lock: protects @sva_param
+ * @iopf_param: I/O Page Fault queue and data
  * @fwspec: IOMMU fwspec data
  * @priv: IOMMU Driver private data
  *
@@ -360,6 +369,7 @@ struct dev_iommu {
 	struct iommu_fault_param	*fault_param;
 	struct iommu_sva_param		*sva_param;
 	struct mutex			sva_lock;
+	struct iopf_device_param	*iopf_param;
 	struct iommu_fwspec		*fwspec;
 	void				*priv;
 };
@@ -1071,4 +1081,54 @@ void iommu_debugfs_setup(void);
 static inline void iommu_debugfs_setup(void) {}
 #endif
 
+#ifdef CONFIG_IOMMU_PAGE_FAULT
+extern int iommu_queue_iopf(struct iommu_fault *fault, void *cookie);
+
+extern int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev);
+extern int iopf_queue_remove_device(struct iopf_queue *queue,
+				    struct device *dev);
+extern int iopf_queue_flush_dev(struct device *dev, int pasid);
+extern struct iopf_queue *
+iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie);
+extern void iopf_queue_free(struct iopf_queue *queue);
+extern int iopf_queue_discard_partial(struct iopf_queue *queue);
+#else /* CONFIG_IOMMU_PAGE_FAULT */
+static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
+{
+	return -ENODEV;
+}
+
+static inline int iopf_queue_add_device(struct iopf_queue *queue,
+					struct device *dev)
+{
+	return -ENODEV;
+}
+
+static inline int iopf_queue_remove_device(struct iopf_queue *queue,
+					   struct device *dev)
+{
+	return -ENODEV;
+}
+
+static inline int iopf_queue_flush_dev(struct device *dev, int pasid)
+{
+	return -ENODEV;
+}
+
+static inline struct iopf_queue *
+iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie)
+{
+	return NULL;
+}
+
+static inline void iopf_queue_free(struct iopf_queue *queue)
+{
+}
+
+static inline int iopf_queue_discard_partial(struct iopf_queue *queue)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_IOMMU_PAGE_FAULT */
+
 #endif /* __LINUX_IOMMU_H */
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
new file mode 100644
index 0000000000000..5bba8e6a13be2
--- /dev/null
+++ b/drivers/iommu/io-pgfault.c
@@ -0,0 +1,452 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Handle device page faults
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ */
+
+#include <linux/iommu.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+/**
+ * struct iopf_queue - IO Page Fault queue
+ * @wq: the fault workqueue
+ * @flush: low-level flush callback
+ * @flush_arg: flush() argument
+ * @devices: devices attached to this queue
+ * @lock: protects the device list
+ */
+struct iopf_queue {
+	struct workqueue_struct		*wq;
+	iopf_queue_flush_t		flush;
+	void				*flush_arg;
+	struct list_head		devices;
+	struct mutex			lock;
+};
+
+/**
+ * struct iopf_device_param - IO Page Fault data attached to a device
+ * @dev: the device that owns this param
+ * @queue: IOPF queue
+ * @queue_list: index into queue->devices
+ * @partial: faults that are part of a Page Request Group for which the last
+ *           request hasn't been submitted yet.
+ * @busy: the param is being used
+ * @wq_head: signal a change to @busy
+ */
+struct iopf_device_param {
+	struct device			*dev;
+	struct iopf_queue		*queue;
+	struct list_head		queue_list;
+	struct list_head		partial;
+	unsigned long			busy;
+	wait_queue_head_t		wq_head;
+};
+
+struct iopf_fault {
+	struct iommu_fault		fault;
+	struct list_head		head;
+};
+
+struct iopf_group {
+	struct iopf_fault		last_fault;
+	struct list_head		faults;
+	struct work_struct		work;
+	struct device			*dev;
+};
+
+static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
+			       enum iommu_page_response_code status)
+{
+	struct iommu_page_response resp = {
+		.version	= IOMMU_PAGE_RESP_VERSION_1,
+		.pasid		= iopf->fault.prm.pasid,
+		.grpid		= iopf->fault.prm.grpid,
+		.code		= status,
+	};
+
+	if (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)
+		resp.flags = IOMMU_PAGE_RESP_PASID_VALID;
+
+	return iommu_page_response(dev, &resp);
+}
+
+static enum iommu_page_response_code
+iopf_handle_single(struct iopf_fault *iopf)
+{
+	/* TODO */
+	return -ENODEV;
+}
+
+static void iopf_handle_group(struct work_struct *work)
+{
+	struct iopf_group *group;
+	struct iopf_fault *iopf, *next;
+	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
+
+	group = container_of(work, struct iopf_group, work);
+
+	list_for_each_entry_safe(iopf, next, &group->faults, head) {
+		/*
+		 * For the moment, errors are sticky: don't handle subsequent
+		 * faults in the group if there is an error.
+		 */
+		if (status == IOMMU_PAGE_RESP_SUCCESS)
+			status = iopf_handle_single(iopf);
+
+		if (!(iopf->fault.prm.flags &
+		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
+			kfree(iopf);
+	}
+
+	iopf_complete_group(group->dev, &group->last_fault, status);
+	kfree(group);
+}
+
+/**
+ * iommu_queue_iopf - IO Page Fault handler
+ * @fault: fault event
+ * @cookie: struct device, passed to iommu_register_device_fault_handler.
+ *
+ * Add a fault to the device workqueue, to be handled by mm.
+ *
+ * Return: 0 on success and <0 on error.
+ */
+int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
+{
+	int ret;
+	struct iopf_group *group;
+	struct iopf_fault *iopf, *next;
+	struct iopf_device_param *iopf_param;
+
+	struct device *dev = cookie;
+	struct dev_iommu *param = dev->iommu;
+
+	lockdep_assert_held(&param->lock);
+
+	if (fault->type != IOMMU_FAULT_PAGE_REQ)
+		/* Not a recoverable page fault */
+		return -EOPNOTSUPP;
+
+	/*
+	 * As long as we're holding param->lock, the queue can't be unlinked
+	 * from the device and therefore cannot disappear.
+	 */
+	iopf_param = param->iopf_param;
+	if (!iopf_param)
+		return -ENODEV;
+
+	if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) {
+		iopf = kzalloc(sizeof(*iopf), GFP_KERNEL);
+		if (!iopf)
+			return -ENOMEM;
+
+		iopf->fault = *fault;
+
+		/* Non-last request of a group. Postpone until the last one */
+		list_add(&iopf->head, &iopf_param->partial);
+
+		return 0;
+	}
+
+	group = kzalloc(sizeof(*group), GFP_KERNEL);
+	if (!group) {
+		/*
+		 * The caller will send a response to the hardware. But we do
+		 * need to clean up before leaving, otherwise partial faults
+		 * will be stuck.
+		 */
+		ret = -ENOMEM;
+		goto cleanup_partial;
+	}
+
+	group->dev = dev;
+	group->last_fault.fault = *fault;
+	INIT_LIST_HEAD(&group->faults);
+	list_add(&group->last_fault.head, &group->faults);
+	INIT_WORK(&group->work, iopf_handle_group);
+
+	/* See if we have partial faults for this group */
+	list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) {
+		if (iopf->fault.prm.grpid == fault->prm.grpid)
+			/* Insert *before* the last fault */
+			list_move(&iopf->head, &group->faults);
+	}
+
+	queue_work(iopf_param->queue->wq, &group->work);
+	return 0;
+
+cleanup_partial:
+	list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) {
+		if (iopf->fault.prm.grpid == fault->prm.grpid) {
+			list_del(&iopf->head);
+			kfree(iopf);
+		}
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_queue_iopf);
+
+/**
+ * iopf_queue_flush_dev - Ensure that all queued faults have been processed
+ * @dev: the endpoint whose faults need to be flushed.
+ * @pasid: the PASID affected by this flush
+ *
+ * Users must call this function when releasing a PASID, to ensure that all
+ * pending faults for this PASID have been handled, and won't hit the address
+ * space of the next process that uses this PASID.
+ *
+ * This function can also be called before shutting down the device, in which
+ * case @pasid should be IOMMU_PASID_INVALID.
+ *
+ * Return: 0 on success and <0 on error.
+ */
+int iopf_queue_flush_dev(struct device *dev, int pasid)
+{
+	int ret = 0;
+	struct iopf_queue *queue;
+	struct iopf_device_param *iopf_param;
+	struct dev_iommu *param = dev->iommu;
+
+	if (!param)
+		return -ENODEV;
+
+	/*
+	 * It is incredibly easy to find ourselves in a deadlock situation if
+	 * we're not careful, because we're taking the opposite path as
+	 * iommu_queue_iopf:
+	 *
+	 *   iopf_queue_flush_dev()   |  PRI queue handler
+	 *    lock(&param->lock)      |   iommu_queue_iopf()
+	 *     queue->flush()         |    lock(&param->lock)
+	 *      wait PRI queue empty  |
+	 *
+	 * So we can't hold the device param lock while flushing. Take a
+	 * reference to the device param instead, to prevent the queue from
+	 * going away.
+	 */
+	mutex_lock(&param->lock);
+	iopf_param = param->iopf_param;
+	if (iopf_param) {
+		queue = param->iopf_param->queue;
+		iopf_param->busy++;
+	} else {
+		ret = -ENODEV;
+	}
+	mutex_unlock(&param->lock);
+	if (ret)
+		return ret;
+
+	/*
+	 * When removing a PASID, the device driver tells the device to stop
+	 * using it, and flush any pending fault to the IOMMU.
+	 * In this flush callback, the IOMMU driver makes sure that there are
+	 * no such faults left in the low-level queue.
+	 */
+	queue->flush(queue->flush_arg, dev, pasid);
+
+	flush_workqueue(queue->wq);
+
+	mutex_lock(&param->lock);
+	iopf_param->busy--;
+	wake_up(&iopf_param->wq_head);
+	mutex_unlock(&param->lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_flush_dev);
+
+/**
+ * iopf_queue_discard_partial - Remove all pending partial faults
+ * @queue: the queue whose partial faults need to be discarded
+ *
+ * When the hardware queue overflows, the last page faults in a group may have
+ * been lost and the IOMMU driver calls this to discard all partial faults.
+ * The driver shouldn't be adding new faults to this queue concurrently.
+ *
+ * Return: 0 on success and <0 on error.
+ */
+int iopf_queue_discard_partial(struct iopf_queue *queue)
+{
+	struct iopf_fault *iopf, *next;
+	struct iopf_device_param *iopf_param;
+
+	if (!queue)
+		return -EINVAL;
+
+	mutex_lock(&queue->lock);
+	list_for_each_entry(iopf_param, &queue->devices, queue_list) {
+		list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
+			kfree(iopf);
+	}
+	mutex_unlock(&queue->lock);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_discard_partial);
+
+/**
+ * iopf_queue_add_device - Add producer to the fault queue
+ * @queue: IOPF queue
+ * @dev: device to add
+ *
+ * Return: 0 on success and <0 on error.
+ */
+int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev)
+{
+	int ret = -EINVAL;
+	struct iopf_device_param *iopf_param;
+	struct dev_iommu *param = dev->iommu;
+
+	if (!param)
+		return -ENODEV;
+
+	iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL);
+	if (!iopf_param)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&iopf_param->partial);
+	iopf_param->queue = queue;
+	iopf_param->dev = dev;
+	init_waitqueue_head(&iopf_param->wq_head);
+
+	mutex_lock(&queue->lock);
+	mutex_lock(&param->lock);
+	if (!param->iopf_param) {
+		list_add(&iopf_param->queue_list, &queue->devices);
+		param->iopf_param = iopf_param;
+		ret = 0;
+	}
+	mutex_unlock(&param->lock);
+	mutex_unlock(&queue->lock);
+
+	if (ret)
+		kfree(iopf_param);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_add_device);
+
+/**
+ * iopf_queue_remove_device - Remove producer from fault queue
+ * @queue: IOPF queue
+ * @dev: device to remove
+ *
+ * Caller makes sure that no more faults are reported for this device.
+ *
+ * Return: 0 on success and <0 on error.
+ */
+int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
+{
+	bool busy;
+	int ret = 0;
+	struct iopf_fault *iopf, *next;
+	struct iopf_device_param *iopf_param;
+	struct dev_iommu *param = dev->iommu;
+
+	if (!param || !queue)
+		return -EINVAL;
+
+	do {
+		mutex_lock(&queue->lock);
+		mutex_lock(&param->lock);
+		iopf_param = param->iopf_param;
+		if (iopf_param && iopf_param->queue == queue) {
+			busy = iopf_param->busy;
+			if (!busy) {
+				list_del(&iopf_param->queue_list);
+				param->iopf_param = NULL;
+			}
+		} else {
+			ret = -EINVAL;
+		}
+		mutex_unlock(&param->lock);
+		mutex_unlock(&queue->lock);
+		if (ret)
+			return ret;
+
+		/*
+		 * If there is an ongoing flush, wait for it to complete and
+		 * then retry. iopf_param isn't going away since we're the only
+		 * thread that can free it.
+		 */
+		if (busy)
+			wait_event(iopf_param->wq_head, !iopf_param->busy);
+	} while (busy);
+
+	/* Just in case some faults are still stuck */
+	list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
+		kfree(iopf);
+
+	kfree(iopf_param);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_remove_device);
+
+/**
+ * iopf_queue_alloc - Allocate and initialize a fault queue
+ * @name: a unique string identifying the queue (for workqueue)
+ * @flush: a callback that flushes the low-level queue
+ * @cookie: driver-private data passed to the flush callback
+ *
+ * The callback is called before the workqueue is flushed. The IOMMU driver
+ * must commit all faults that are pending in its low-level queues at the time
+ * of the call, into the IOPF queue (with iommu_report_device_fault). The
+ * callback takes a device pointer as argument, hinting what endpoint is
+ * causing the flush. When the device is NULL, all faults should be committed.
+ *
+ * Return: the queue on success and NULL on error.
+ */
+struct iopf_queue *
+iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie)
+{
+	struct iopf_queue *queue;
+
+	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	if (!queue)
+		return NULL;
+
+	/*
+	 * The WQ is unordered because the low-level handler enqueues faults by
+	 * group. PRI requests within a group have to be ordered, but once
+	 * that's dealt with, the high-level function can handle groups out of
+	 * order.
+	 */
+	queue->wq = alloc_workqueue("iopf_queue/%s", WQ_UNBOUND, 0, name);
+	if (!queue->wq) {
+		kfree(queue);
+		return NULL;
+	}
+
+	queue->flush = flush;
+	queue->flush_arg = cookie;
+	INIT_LIST_HEAD(&queue->devices);
+	mutex_init(&queue->lock);
+
+	return queue;
+}
+EXPORT_SYMBOL_GPL(iopf_queue_alloc);
+
+/**
+ * iopf_queue_free - Free IOPF queue
+ * @queue: queue to free
+ *
+ * Counterpart to iopf_queue_alloc(). The driver must not be queuing faults or
+ * adding/removing devices on this queue anymore.
+ */
+void iopf_queue_free(struct iopf_queue *queue)
+{
+	struct iopf_device_param *iopf_param, *next;
+
+	if (!queue)
+		return;
+
+	list_for_each_entry_safe(iopf_param, next, &queue->devices, queue_list)
+		iopf_queue_remove_device(queue, iopf_param->dev);
+
+	destroy_workqueue(queue->wq);
+	kfree(queue);
+}
+EXPORT_SYMBOL_GPL(iopf_queue_free);
From patchwork Tue Apr 14 17:02:33 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202082
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 05/25] iommu/iopf: Handle mm faults
Date: Tue, 14 Apr 2020 19:02:33 +0200
Message-Id: <20200414170252.714402-6-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

When a recoverable page fault is handled by the fault workqueue, find the
associated mm and call handle_mm_fault.
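Condensed for illustration (the full version, with permission checks,
is in the diff below), the handling of a single fault becomes:

	struct mm_struct *mm = iommu_sva_find(prm->pasid);

	if (!IS_ERR_OR_NULL(mm)) {
		down_read(&mm->mmap_sem);
		/* find_extend_vma() and permission checks, then: */
		ret = handle_mm_fault(vma, prm->addr, fault_flags);
		up_read(&mm->mmap_sem);
		/* drop the reference taken by iommu_sva_find() */
		mmput(mm);
	}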
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v4->v5: no need to call mmput_async() anymore, since the MMU release()
doesn't flush the IOPF queue anymore.
---
 drivers/iommu/io-pgfault.c | 77 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 5bba8e6a13be2..fd4244023b33f 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -7,6 +7,7 @@
 
 #include <linux/iommu.h>
 #include <linux/list.h>
+#include <linux/sched/mm.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
@@ -76,8 +77,57 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 static enum iommu_page_response_code
 iopf_handle_single(struct iopf_fault *iopf)
 {
-	/* TODO */
-	return -ENODEV;
+	vm_fault_t ret;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	unsigned int access_flags = 0;
+	unsigned int fault_flags = FAULT_FLAG_REMOTE;
+	struct iommu_fault_page_request *prm = &iopf->fault.prm;
+	enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
+
+	if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
+		return status;
+
+	mm = iommu_sva_find(prm->pasid);
+	if (IS_ERR_OR_NULL(mm))
+		return status;
+
+	down_read(&mm->mmap_sem);
+
+	vma = find_extend_vma(mm, prm->addr);
+	if (!vma)
+		/* Unmapped area */
+		goto out_put_mm;
+
+	if (prm->perm & IOMMU_FAULT_PERM_READ)
+		access_flags |= VM_READ;
+
+	if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
+		access_flags |= VM_WRITE;
+		fault_flags |= FAULT_FLAG_WRITE;
+	}
+
+	if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
+		access_flags |= VM_EXEC;
+		fault_flags |= FAULT_FLAG_INSTRUCTION;
+	}
+
+	if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
+		fault_flags |= FAULT_FLAG_USER;
+
+	if (access_flags & ~vma->vm_flags)
+		/* Access fault */
+		goto out_put_mm;
+
+	ret = handle_mm_fault(vma, prm->addr, fault_flags);
+	status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
+		IOMMU_PAGE_RESP_SUCCESS;
+
+out_put_mm:
+	up_read(&mm->mmap_sem);
+	mmput(mm);
+
+	return status;
 }
 
 static void iopf_handle_group(struct work_struct *work)
@@ -112,6 +162,29 @@ static void iopf_handle_group(struct work_struct *work)
  *
  * Add a fault to the device workqueue, to be handled by mm.
  *
+ * This module doesn't handle PCI PASID Stop Marker; IOMMU drivers must
+ * discard them before reporting faults. A PASID Stop Marker (LRW = 0b100)
+ * doesn't expect a response. Some PCI devices may generate it when disabling
+ * a PASID (issuing a PASID stop request).
+ *
+ * The PASID stop request is issued by the device driver before unbind(). Once
+ * it completes, no page request is generated for this PASID anymore and
+ * outstanding ones have been pushed to the IOMMU (as per PCIe 4.0r1.0 - 6.20.1
+ * and 10.4.1.2 - Managing PASID TLP Prefix Usage). Some PCI devices will wait
+ * for all outstanding page requests to come back with a response before
+ * completing the PASID stop request. Others do not wait for page responses,
+ * and instead issue this Stop Marker that tells us when the PASID can be
+ * reallocated.
+ *
+ * It is safe to discard the Stop Marker because it is an optimization.
+ * a. Page requests, which are posted requests, have been flushed to the IOMMU
+ *    when the stop request completes.
+ * b. We flush all fault queues on unbind() before freeing the PASID.
+ *
+ * So even though the Stop Marker might be issued by the device *after* the
+ * stop request completes, outstanding faults will have been dealt with by the
+ * time we free the PASID.
+ *
  * Return: 0 on success and <0 on error.
  */
 int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
From patchwork Tue Apr 14 17:02:34 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202085
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 06/25] iommu/sva: Register page fault handler
Date: Tue, 14 Apr 2020 19:02:34 +0200
Message-Id: <20200414170252.714402-7-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

When enabling SVA, register the fault handler. The device driver registers
an I/O page fault queue before or after calling iommu_sva_enable(). The
fault queue must be flushed before any io_mm is freed, to make sure that
its PASID isn't used in any fault queue anymore and can be reallocated.
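For illustration, the overall sequence expected from a device driver
looks like this (my_dev_stop_pasid() is a made-up driver hook, and the
bind/unbind calls are sketched from the rest of this series):

	/* setup */
	iopf_queue_add_device(queue, dev);  /* before or after sva_enable */
	iommu_sva_enable(dev, sva_param);   /* now registers iommu_queue_iopf() */
	bond = iommu_sva_bind_device(dev, mm, drvdata);

	/* teardown */
	my_dev_stop_pasid(dev, pasid);      /* device stops issuing this PASID */
	iommu_sva_unbind_device(bond);      /* flushes faults, then frees the PASID */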
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 drivers/iommu/Kconfig     |  1 +
 drivers/iommu/iommu-sva.c | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index bf620bf48da03..411a7ee2ab12d 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -106,6 +106,7 @@ config IOMMU_DMA
 config IOMMU_SVA
 	bool
 	select IOASID
+	select IOMMU_PAGE_FAULT
 	select IOMMU_API
 	select MMU_NOTIFIER
 
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index b177d6cbf4fff..00d5e7e895e80 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -420,6 +420,12 @@ void iommu_sva_unbind_generic(struct iommu_sva *handle)
 	if (WARN_ON(!param))
 		return;
 
+	/*
+	 * Caller stopped the device from issuing PASIDs, now make sure they
+	 * are out of the fault queue.
+	 */
+	iopf_queue_flush_dev(handle->dev, bond->io_mm->pasid);
+
 	mutex_lock(&param->sva_lock);
 	mutex_lock(&iommu_sva_lock);
 	io_mm_detach(bond);
@@ -457,6 +463,10 @@ int iommu_sva_enable(struct device *dev, struct iommu_sva_param *sva_param)
 		goto err_unlock;
 	}
 
+	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
+	if (ret)
+		goto err_unlock;
+
 	dev->iommu->sva_param = new_param;
 	mutex_unlock(&param->sva_lock);
 	return 0;
@@ -494,6 +504,7 @@ int iommu_sva_disable(struct device *dev)
 		goto out_unlock;
 	}
 
+	iommu_unregister_device_fault_handler(dev);
 	kfree(param->sva_param);
 	param->sva_param = NULL;
 out_unlock:
From patchwork Tue Apr 14 17:02:38 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202086
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 10/25] iommu/arm-smmu-v3: Manage ASIDs with xarray
Date: Tue, 14 Apr 2020 19:02:38 +0200
Message-Id: <20200414170252.714402-11-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

In preparation for sharing some ASIDs with the CPU, use a global xarray
to store ASIDs and their context. ASID#0 is now reserved, and the ASID
space is global.
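For reference, this is the stock allocating-xarray pattern; a
self-contained sketch of the scheme used below (the "example_" names are
illustrative):

	#include <linux/xarray.h>

	/* ALLOC1 xarrays hand out indices starting at 1, so ASID#0
	 * stays reserved. */
	static DEFINE_XARRAY_ALLOC1(example_asid_xa);

	static int example_alloc_asid(void *ctx, unsigned int asid_bits)
	{
		u32 asid;
		int ret = xa_alloc(&example_asid_xa, &asid, ctx,
				   XA_LIMIT(1, (1 << asid_bits) - 1),
				   GFP_KERNEL);

		/* On success @asid holds the new ID; free it later with
		 * xa_erase(), as arm_smmu_free_asid() does below. */
		return ret ? ret : asid;
	}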
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 drivers/iommu/arm-smmu-v3.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 60a415e8e2b6f..96ee60002e85e 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -664,7 +664,6 @@ struct arm_smmu_device {
 
 #define ARM_SMMU_MAX_ASIDS		(1 << 16)
 	unsigned int			asid_bits;
-	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
 
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
@@ -724,6 +723,8 @@ struct arm_smmu_option_prop {
 	const char *prop;
 };
 
+static DEFINE_XARRAY_ALLOC1(asid_xa);
+
 static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
 	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1763,6 +1764,14 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
 	cdcfg->cdtab = NULL;
 }
 
+static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+{
+	if (!cd->asid)
+		return;
+
+	xa_erase(&asid_xa, cd->asid);
+}
+
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -2448,10 +2457,9 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 
-		if (cfg->cdcfg.cdtab) {
+		if (cfg->cdcfg.cdtab)
 			arm_smmu_free_cd_tables(smmu_domain);
-			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
-		}
+		arm_smmu_free_asid(&cfg->cd);
 	} else {
 		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 		if (cfg->vmid)
@@ -2466,14 +2474,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 				       struct io_pgtable_cfg *pgtbl_cfg)
 {
 	int ret;
-	int asid;
+	u32 asid;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
 
-	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
-	if (asid < 0)
-		return asid;
+	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
+		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	if (ret)
+		return ret;
 
 	cfg->s1cdmax = master->ssid_bits;
 
@@ -2506,7 +2515,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 out_free_cd_tables:
 	arm_smmu_free_cd_tables(smmu_domain);
 out_free_asid:
-	arm_smmu_bitmap_free(smmu->asid_map, asid);
+	arm_smmu_free_asid(&cfg->cd);
 	return ret;
 }
From patchwork Tue Apr 14 17:02:39 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202087
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>,
 Suzuki K Poulose
Subject: [PATCH v5 11/25] arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
Date: Tue, 14 Apr 2020 19:02:39 +0200
Message-Id: <20200414170252.714402-12-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to
share CPU page tables with devices. Allow the driver to be built as a
module by exporting the read_sanitised_ftr_reg() cpufeature symbol.
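With the export in place, a modular driver can do something like the
following sketch (it mirrors the check added by a later patch in this
series; the names are from the arm64 cpufeature API):

	u64 reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	unsigned long parange;

	parange = cpuid_feature_extract_unsigned_field(reg,
					ID_AA64MMFR0_PARANGE_SHIFT);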
Cc: Suzuki K Poulose
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 arch/arm64/kernel/cpufeature.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb2..5f6adbf4ae893 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -841,6 +841,7 @@ u64 read_sanitised_ftr_reg(u32 id)
 	BUG_ON(!regp);
 	return regp->sys_val;
 }
+EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
 #define read_sysreg_case(r)	\
 	case r: return read_sysreg_s(r)
From patchwork Tue Apr 14 17:02:42 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 202084
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
 robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
 xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 14/25] iommu/arm-smmu-v3: Add support for VHE
Date: Tue, 14 Apr 2020 19:02:42 +0200
Message-Id: <20200414170252.714402-15-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

The ARMv8.1 extensions added Virtualization Host Extensions (VHE), which
allow the host kernel to run at EL2. When using normal DMA, Device and
CPU address spaces are dissociated, and do not need to implement the same
capabilities, so VHE hasn't been used in the SMMU until now.

With shared address spaces however, ASIDs are shared between MMU and
SMMU, and broadcast TLB invalidations issued by a CPU are taken into
account by the SMMU. TLB entries on both sides need to have the same
exception level in order to be cleared with a single invalidation.

When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal
DMA mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but
shouldn't be otherwise affected by this change.
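Condensed from the diff below, the invalidation opcode selection reduces
to the following (illustrative helper, not part of the patch):

	static u32 example_tlbi_asid_opcode(struct arm_smmu_device *smmu)
	{
		/* With VHE, the SMMU's TLB entries are tagged EL2 */
		return smmu->features & ARM_SMMU_FEAT_E2H ?
		       CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID;
	}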
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v4->v5: bump feature bit
---
 drivers/iommu/arm-smmu-v3.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 8fbc5da133ae4..21d458d817fc2 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -480,6 +481,8 @@ struct arm_smmu_cmdq_ent {
 		#define CMDQ_OP_TLBI_NH_ASID	0x11
 		#define CMDQ_OP_TLBI_NH_VA	0x12
 		#define CMDQ_OP_TLBI_EL2_ALL	0x20
+		#define CMDQ_OP_TLBI_EL2_ASID	0x21
+		#define CMDQ_OP_TLBI_EL2_VA	0x22
 		#define CMDQ_OP_TLBI_S12_VMALL	0x28
 		#define CMDQ_OP_TLBI_S2_IPA	0x2a
 		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
@@ -651,6 +654,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
 #define ARM_SMMU_FEAT_VAX		(1 << 14)
 #define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
+#define ARM_SMMU_FEAT_E2H		(1 << 16)
 	u32				features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -924,6 +928,8 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		/* Fallthrough */
+	case CMDQ_OP_TLBI_EL2_VA:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
@@ -945,6 +951,9 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	case CMDQ_OP_TLBI_S12_VMALL:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		break;
+	case CMDQ_OP_TLBI_EL2_ASID:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		break;
 	case CMDQ_OP_ATC_INV:
 		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
 		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
@@ -1538,7 +1547,8 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
 static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
 {
 	struct arm_smmu_cmdq_ent cmd = {
-		.opcode	= CMDQ_OP_TLBI_NH_ASID,
+		.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
+			  CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
 		.tlbi.asid = asid,
 	};
 
@@ -2093,13 +2103,16 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	}
 
 	if (s1_cfg) {
+		int strw = smmu->features & ARM_SMMU_FEAT_E2H ?
+			STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1;
+
 		BUG_ON(ste_live);
 		dst[1] = cpu_to_le64(
 			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
 			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
 			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
 			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
-			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
+			 FIELD_PREP(STRTAB_STE_1_STRW, strw));
 
 		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
 		    !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
@@ -2495,7 +2508,8 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 		return;
 
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
+		cmd.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
+				  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
 		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
 	} else {
 		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
@@ -3800,7 +3814,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
 
 	/* CR2 (random crap) */
-	reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+	reg = CR2_PTM | CR2_RECINVSID;
+
+	if (smmu->features & ARM_SMMU_FEAT_E2H)
+		reg |= CR2_E2H;
+
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
 
 	/* Stream table */
@@ -3958,8 +3976,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_MSI)
 		smmu->features |= ARM_SMMU_FEAT_MSI;
 
-	if (reg & IDR0_HYP)
+	if (reg & IDR0_HYP) {
 		smmu->features |= ARM_SMMU_FEAT_HYP;
+		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+			smmu->features |= ARM_SMMU_FEAT_E2H;
+	}
 
 	/*
 	 * The coherency feature as set by FW is used in preference to the ID
From patchwork Tue Apr 14 17:02:44 2020
From: Jean-Philippe Brucker
Subject: [PATCH v5 16/25] iommu/arm-smmu-v3: Add SVA feature checking
Date: Tue, 14 Apr 2020 19:02:44 +0200
Message-Id: <20200414170252.714402-17-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

Aggregate all sanity-checks for sharing CPU page tables with the SMMU under a
single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to check FEAT_ATS
and FEAT_PRI. For platform SVA, they will most likely have to check
FEAT_STALLS.

Cc: Suzuki K Poulose
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: bump feature bit
---
 drivers/iommu/arm-smmu-v3.c | 72 +++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e7de8a7459fa4..d209d85402a83 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -657,6 +657,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
 #define ARM_SMMU_FEAT_E2H		(1 << 16)
 #define ARM_SMMU_FEAT_BTM		(1 << 17)
+#define ARM_SMMU_FEAT_SVA		(1 << 18)
 	u32				features;

 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -3930,6 +3931,74 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	return 0;
 }

+static bool arm_smmu_supports_sva(struct arm_smmu_device *smmu)
+{
+	unsigned long reg, fld;
+	unsigned long oas;
+	unsigned long asid_bits;
+
+	u32 feat_mask = ARM_SMMU_FEAT_BTM | ARM_SMMU_FEAT_COHERENCY;
+
+	if ((smmu->features & feat_mask) != feat_mask)
+		return false;
+
+	if (!(smmu->pgsize_bitmap & PAGE_SIZE))
+		return false;
+
+	/*
+	 * Get the smallest PA size of all CPUs (sanitized by cpufeature). We're
+	 * not even pretending to support AArch32 here.
+	 */
+	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
+	switch (fld) {
+	case 0x0:
+		oas = 32;
+		break;
+	case 0x1:
+		oas = 36;
+		break;
+	case 0x2:
+		oas = 40;
+		break;
+	case 0x3:
+		oas = 42;
+		break;
+	case 0x4:
+		oas = 44;
+		break;
+	case 0x5:
+		oas = 48;
+		break;
+	case 0x6:
+		oas = 52;
+		break;
+	default:
+		return false;
+	}
+
+	/* abort if MMU outputs addresses greater than what we support. */
+	if (smmu->oas < oas)
+		return false;
+
+	/* We can support bigger ASIDs than the CPU, but not smaller */
+	fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_ASID_SHIFT);
+	asid_bits = fld ? 16 : 8;
+	if (smmu->asid_bits < asid_bits)
+		return false;
+
+	/*
+	 * See max_pinned_asids in arch/arm64/mm/context.c. The following is
+	 * generally the maximum number of bindable processes.
+	 */
+	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+		asid_bits--;
+	dev_dbg(smmu->dev, "%d shared contexts\n", (1 << asid_bits) -
+		num_possible_cpus() - 2);
+
+	return true;
+}
+
 static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
 	u32 reg;
@@ -4142,6 +4211,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)

 	smmu->ias = max(smmu->ias, smmu->oas);

+	if (arm_smmu_supports_sva(smmu))
+		smmu->features |= ARM_SMMU_FEAT_SVA;
+
 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
 		 smmu->ias, smmu->oas, smmu->features);
 	return 0;
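[Editor's note] The PARANGE switch in arm_smmu_supports_sva() above is a direct
transcription of the ID_AA64MMFR0_EL1.PARange encodings. For readers who prefer
to see the mapping at a glance, the same logic can be written table-driven;
this is a freestanding sketch with invented names, not a proposed change:

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* ID_AA64MMFR0_EL1.PARange field value -> output address size in bits. */
static const unsigned int parange_to_oas[] = {
	[0x0] = 32, [0x1] = 36, [0x2] = 40, [0x3] = 42,
	[0x4] = 44, [0x5] = 48, [0x6] = 52,
};

/* Returns 0 for reserved encodings; the caller then gives up on SVA. */
static unsigned int decode_oas(unsigned long fld)
{
	return fld < ARRAY_SIZE(parange_to_oas) ? parange_to_oas[fld] : 0;
}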
From patchwork Tue Apr 14 17:02:45 2020
From: Jean-Philippe Brucker
Subject: [PATCH v5 17/25] iommu/arm-smmu-v3: Implement mm operations
Date: Tue, 14 Apr 2020 19:02:45 +0200
Message-Id: <20200414170252.714402-18-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

Hook SVA operations to support sharing page tables with the SMMUv3:

* dev_enable/disable/has_feature for device drivers to modify the SVA state.
* sva_bind/unbind and sva_get_pasid to bind device and address spaces.
* The mm_attach/detach/clear/invalidate/free callbacks from iommu-sva.

The clear() operation has to detach the page tables while DMA may still be
running (because the process died). To avoid any event 0x0a print (C_BAD_CD),
we disable translation and error reporting, without clearing CD.V. PCIe
Translation Requests and Page Requests are silently denied. The detach()
operation always happens whether or not clear() was invoked, and properly
disables the CD.

Signed-off-by: Jean-Philippe Brucker
---
v4->v5: Add clear() operation
---
 drivers/iommu/Kconfig       |   1 +
 drivers/iommu/arm-smmu-v3.c | 214 +++++++++++++++++++++++++++++++++++-
 2 files changed, 211 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 411a7ee2ab12d..8118f090a51b3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -435,6 +435,7 @@ config ARM_SMMU_V3
 	tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
 	depends on ARM64
 	select IOMMU_API
+	select IOMMU_SVA
 	select IOMMU_IO_PGTABLE_LPAE
 	select GENERIC_MSI_IRQ_DOMAIN
 	help

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index d209d85402a83..6640c2ac2a7c5 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -36,6 +36,7 @@
 #include

 #include "io-pgtable-arm.h"
+#include "iommu-sva.h"

 /* MMIO registers */
 #define ARM_SMMU_IDR0			0x0
@@ -739,6 +740,13 @@ struct arm_smmu_option_prop {
 static DEFINE_XARRAY_ALLOC1(asid_xa);
 static DEFINE_SPINLOCK(contexts_lock);

+/*
+ * When a process dies, DMA is still running but we need to clear the pgd. If we
+ * simply cleared the valid bit from the context descriptor, we'd get event 0x0a
+ * which is not recoverable.
+ */
+static struct arm_smmu_ctx_desc invalid_cd = { 0 };
+
 static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
 	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1649,7 +1657,9 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 	 * (2) Install a secondary CD, for SID+SSID traffic.
 	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
 	 *     CD, then invalidate the old entry and mappings.
-	 * (4) Remove a secondary CD.
+	 * (4) Quiesce the context without clearing the valid bit. Disable
+	 *     translation, and ignore any translation fault.
+	 * (5) Remove a secondary CD.
 	 */
 	u64 val;
 	bool cd_live;
@@ -1666,8 +1676,11 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 	val = le64_to_cpu(cdptr[0]);
 	cd_live = !!(val & CTXDESC_CD_0_V);

-	if (!cd) { /* (4) */
+	if (!cd) { /* (5) */
 		val = 0;
+	} else if (cd == &invalid_cd) { /* (4) */
+		val &= ~(CTXDESC_CD_0_S | CTXDESC_CD_0_R);
+		val |= CTXDESC_CD_0_TCR_EPD0;
 	} else if (cd_live) { /* (3) */
 		val &= ~CTXDESC_CD_0_ASID;
 		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
@@ -1884,7 +1897,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
 	return NULL;
 }

-__maybe_unused
 static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
 {
 	u16 asid;
@@ -1978,7 +1990,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
 	return ERR_PTR(ret);
 }

-__maybe_unused
 static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
 {
 	if (arm_smmu_free_asid(cd)) {
@@ -3008,6 +3019,16 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	master = dev_iommu_priv_get(dev);
 	smmu = master->smmu;

+	/*
+	 * Checking that SVA is disabled ensures that this device isn't bound to
+	 * any mm, and can be safely detached from its old domain. Bonds cannot
+	 * be removed concurrently since we're holding the group mutex.
+	 */
+	if (iommu_sva_enabled(dev)) {
+		dev_err(dev, "cannot attach - SVA enabled\n");
+		return -EBUSY;
+	}
+
 	arm_smmu_detach_dev(master);

 	mutex_lock(&smmu_domain->init_mutex);
@@ -3107,6 +3128,99 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 	return ops->iova_to_phys(ops, iova);
 }

+static void arm_smmu_mm_invalidate(struct device *dev, int pasid, void *entry,
+				   unsigned long iova, size_t size)
+{
+	/* TODO: Invalidate ATC */
+}
+
+static int arm_smmu_mm_attach(struct device *dev, int pasid, void *entry,
+			      bool attach_domain)
+{
+	struct arm_smmu_ctx_desc *cd = entry;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	/*
+	 * If another device in the domain has already been attached, the
+	 * context descriptor is already valid.
+	 */
+	if (!attach_domain)
+		return 0;
+
+	return arm_smmu_write_ctx_desc(smmu_domain, pasid, cd);
+}
+
+static void arm_smmu_mm_clear(struct device *dev, int pasid, void *entry)
+{
+	struct arm_smmu_ctx_desc *cd = entry;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	/*
+	 * DMA may still be running. Keep the cd valid but disable
+	 * translation, so that new events will still result in stall.
+	 */
+	arm_smmu_write_ctx_desc(smmu_domain, pasid, &invalid_cd);
+
+	/*
+	 * The ASID allocator won't broadcast the final TLB invalidations
+	 * for this ASID, so we need to do it manually.
+	 */
+	arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
+
+	/* TODO: invalidate ATC */
+}
+
+static void arm_smmu_mm_detach(struct device *dev, int pasid, void *entry,
+			       bool detach_domain, bool cleared)
+{
+	struct arm_smmu_ctx_desc *cd = entry;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (detach_domain) {
+		arm_smmu_write_ctx_desc(smmu_domain, pasid, NULL);
+
+		if (!cleared)
+			/* See comment in arm_smmu_mm_clear() */
+			arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
+	}
+
+	/* TODO: invalidate ATC */
+}
+
+static void *arm_smmu_mm_alloc(struct mm_struct *mm)
+{
+	return arm_smmu_alloc_shared_cd(mm);
+}
+
+static void arm_smmu_mm_free(void *entry)
+{
+	arm_smmu_free_shared_cd(entry);
+}
+
+static struct io_mm_ops arm_smmu_mm_ops = {
+	.alloc		= arm_smmu_mm_alloc,
+	.invalidate	= arm_smmu_mm_invalidate,
+	.attach		= arm_smmu_mm_attach,
+	.clear		= arm_smmu_mm_clear,
+	.detach		= arm_smmu_mm_detach,
+	.free		= arm_smmu_mm_free,
+};
+
+static struct iommu_sva *
+arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata)
+{
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1)
+		return ERR_PTR(-EINVAL);
+
+	return iommu_sva_bind_generic(dev, mm, &arm_smmu_mm_ops, drvdata);
+}
+
 static struct platform_driver arm_smmu_driver;

 static
@@ -3225,6 +3339,7 @@ static void arm_smmu_remove_device(struct device *dev)
 	master = dev_iommu_priv_get(dev);
 	smmu = master->smmu;
+	WARN_ON(iommu_sva_disable(dev));
 	arm_smmu_detach_dev(master);
 	iommu_group_remove_device(dev);
 	iommu_device_unlink(&smmu->iommu, dev);
@@ -3344,6 +3459,90 @@ static void arm_smmu_get_resv_regions(struct device *dev,
 	iommu_dma_get_resv_regions(dev, head);
 }

+static bool arm_smmu_iopf_supported(struct arm_smmu_master *master)
+{
+	return false;
+}
+
+static bool arm_smmu_dev_has_feature(struct device *dev,
+				     enum iommu_dev_features feat)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!master)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		if (!(master->smmu->features & ARM_SMMU_FEAT_SVA))
+			return false;
+
+		/* SSID and IOPF support are mandatory for the moment */
+		return master->ssid_bits && arm_smmu_iopf_supported(master);
+	default:
+		return false;
+	}
+}
+
+static bool arm_smmu_dev_feature_enabled(struct device *dev,
+					 enum iommu_dev_features feat)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!master)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		return iommu_sva_enabled(dev);
+	default:
+		return false;
+	}
+}
+
+static int arm_smmu_dev_enable_sva(struct device *dev)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+	struct iommu_sva_param param = {
+		.min_pasid = 1,
+		.max_pasid = 0xfffffU,
+	};
+
+	param.max_pasid = min(param.max_pasid, (1U << master->ssid_bits) - 1);
+	return iommu_sva_enable(dev, &param);
+}
+
+static int arm_smmu_dev_enable_feature(struct device *dev,
+				       enum iommu_dev_features feat)
+{
+	if (!arm_smmu_dev_has_feature(dev, feat))
+		return -ENODEV;
+
+	if (arm_smmu_dev_feature_enabled(dev, feat))
+		return -EBUSY;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		return arm_smmu_dev_enable_sva(dev);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int arm_smmu_dev_disable_feature(struct device *dev,
+					enum iommu_dev_features feat)
+{
+	if (!arm_smmu_dev_feature_enabled(dev, feat))
+		return -EINVAL;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		return iommu_sva_disable(dev);
+	default:
+		return -EINVAL;
+	}
+}
+
 static struct iommu_ops arm_smmu_ops = {
 	.capable		= arm_smmu_capable,
 	.domain_alloc		= arm_smmu_domain_alloc,
@@ -3362,6 +3561,13 @@ static struct iommu_ops arm_smmu_ops = {
 	.of_xlate		= arm_smmu_of_xlate,
 	.get_resv_regions	= arm_smmu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
+	.dev_has_feat		= arm_smmu_dev_has_feature,
+	.dev_feat_enabled	= arm_smmu_dev_feature_enabled,
+	.dev_enable_feat	= arm_smmu_dev_enable_feature,
+	.dev_disable_feat	= arm_smmu_dev_disable_feature,
+	.sva_bind		= arm_smmu_sva_bind,
+	.sva_unbind		= iommu_sva_unbind_generic,
+	.sva_get_pasid		= iommu_sva_get_pasid_generic,
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
 };
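[Editor's note] To see how these hooks are meant to be consumed, here is a
hedged sketch of a device driver binding a process address space. The entry
points follow the generic iommu-sva consumer API that this series plugs into;
exact names have varied across revisions, so treat this as an illustration
rather than a stable interface:

static int example_bind(struct device *dev, struct mm_struct *mm, int *pasid)
{
	struct iommu_sva *handle;
	int ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		return ret;

	handle = iommu_sva_bind_device(dev, mm, NULL);
	if (IS_ERR(handle)) {
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
		return PTR_ERR(handle);
	}

	/* Program this PASID into the device before issuing tagged DMA. */
	*pasid = iommu_sva_get_pasid(handle);
	return 0;
}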
lindbergh.monkeyblade.net ([23.128.96.19]:34620 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org with ESMTP id S2391450AbgDNREp (ORCPT ); Tue, 14 Apr 2020 13:04:45 -0400 Received: from mail-wr1-x444.google.com (mail-wr1-x444.google.com [IPv6:2a00:1450:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 02558C061A0F for ; Tue, 14 Apr 2020 10:04:45 -0700 (PDT) Received: by mail-wr1-x444.google.com with SMTP id d27so6044228wra.1 for ; Tue, 14 Apr 2020 10:04:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iFQo4eLK0C51ieV33vlbkwWvrpzk0Iu0nUGg4g65CHs=; b=JI5QkA97eezOSapi2fPqRY+esAG4E9fuADRTO9KkwAXqshRxQg2LKwGOVpQ8bEJ81f 7XPxMj7271i5R78HnNq28pZSLk3B7jV9yN0c4/bTfJRpnWpYm9Lt2jtqPzuqS2HV02Z2 hkq9kvDb8FIDY1FaOeaNgULZXDgiAcMd9YqMGytbbSvWq4fOyoPZG5zIKdYtbwIwjSXa /p9Mczkd/c2xupqo0EB5V5OSAvg3O/E1/F5JPD2SgV6IMuTc6+yiI7lkQVerTEvPWxXm XspXwZKvxZ42rz0b7zJEQWuYEyF6WUnRsl/SYo5BgSVSgOvKFQDMOtkvWsn7y2Hxn6V5 9ulA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iFQo4eLK0C51ieV33vlbkwWvrpzk0Iu0nUGg4g65CHs=; b=S81IIEDN5jSaeiSg9ajK7vuIBdC+R89z1wSfjRxZ+D1ZEfy00uxIE6w4w1WCKv735V T5J1s8crBVYVA2n0uPq0Gbj+FPkdzZNZAIVkDcu+126p/PihZ88flWxdmYEqQFB1fbtU 5ASIFHz1wfTqQ886WbO0o9+Mag8KlX96szvJ44ikxeUYXSbh3oFlCLkcZiNC0NOeZYGM FkCyGz1eB5NXyqgXfcXO7/sgoVd83xXJcifA7qfssZkzxXmoiYpJVSpFR5Yr/OpH3nki GcButU1X3oUXQMNpY14obyCmBlTXgFJGd8q7obR+3zzCMM6itS0msNEHIkQ6vlp2FqoN JOgQ== X-Gm-Message-State: AGi0PuauJu1Hw3UlB+niQ6OyLTOfk7zJbsbEgWv9dpVWVHaHjVjWUndy HUb8DRB+BWB714B9h9dQDyriEA== X-Google-Smtp-Source: APiQypLZa+lawPw64nTDrcMPpw0HQPprW6BlaWllx/hPptH94lEGFoejVXa1lUE2ugeNThH+6x6yTQ== X-Received: by 2002:a5d:4085:: with SMTP id o5mr23364377wrp.327.1586883883718; Tue, 14 Apr 2020 10:04:43 -0700 (PDT) Received: from localhost.localdomain ([2001:171b:226b:54a0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id x18sm19549147wrs.11.2020.04.14.10.04.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Apr 2020 10:04:43 -0700 (PDT) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca, xuzaibo@huawei.com, Jean-Philippe Brucker Subject: [PATCH v5 18/25] iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops Date: Tue, 14 Apr 2020 19:02:46 +0200 Message-Id: <20200414170252.714402-19-jean-philippe@linaro.org> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org> References: <20200414170252.714402-1-jean-philippe@linaro.org> MIME-Version: 1.0 Sender: devicetree-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org iommu-sva calls us when an mm is modified. Perform the required ATC invalidations. 
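[Editor's note] Background for the diff that follows: ATS invalidations
address naturally aligned power-of-two spans, so a byte range must first be
widened to the smallest such span containing it. A freestanding sketch of that
computation (4KB grain, size assumed non-zero; the driver additionally
special-cases size 0 as a full-ATC invalidation and encodes the span into the
TLP address bits):

#include <stdint.h>

/*
 * Return the aligned base address and log2(bytes) of the smallest naturally
 * aligned power-of-two region covering [iova, iova + size).
 */
static void atc_span(uint64_t iova, uint64_t size,
		     uint64_t *addr, unsigned int *log2_span)
{
	uint64_t pstart = iova >> 12;
	uint64_t pend = (iova + size - 1) >> 12;
	unsigned int bits = 0;

	/* Grow the span until one aligned block holds both end pages. */
	while ((pstart >> bits) != (pend >> bits))
		bits++;

	*log2_span = 12 + bits;
	*addr = ((pstart >> bits) << bits) << 12;
}

For example, iova 0x11000 with size 0x3000 yields a 16KB span based at
0x10000, which covers the requested pages 0x11 through 0x13.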
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: more comments
---
 drivers/iommu/arm-smmu-v3.c | 70 ++++++++++++++++++++++++++++++-------
 1 file changed, 58 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 6640c2ac2a7c5..c4bffb14461aa 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2375,6 +2375,20 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 	size_t inval_grain_shift = 12;
 	unsigned long page_start, page_end;

+	/*
+	 * ATS and PASID:
+	 *
+	 * If substream_valid is clear, the PCIe TLP is sent without a PASID
+	 * prefix. In that case all ATC entries within the address range are
+	 * invalidated, including those that were requested with a PASID! There
+	 * is no way to invalidate only entries without PASID.
+	 *
+	 * When using STRTAB_STE_1_S1DSS_SSID0 (reserving CD 0 for non-PASID
+	 * traffic), translation requests without PASID create ATC entries
+	 * without PASID, which must be invalidated with substream_valid clear.
+	 * This has the unpleasant side-effect of invalidating all PASID-tagged
+	 * ATC entries within the address range.
+	 */
 	*cmd = (struct arm_smmu_cmdq_ent) {
 		.opcode			= CMDQ_OP_ATC_INV,
 		.substream_valid	= !!ssid,
@@ -2418,12 +2432,12 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 	cmd->atc.size	= log2_span;
 }

-static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, int ssid)
 {
 	int i;
 	struct arm_smmu_cmdq_ent cmd;

-	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd);

 	for (i = 0; i < master->num_sids; i++) {
 		cmd.atc.sid = master->sids[i];
@@ -2934,7 +2948,7 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 	 * ATC invalidation via the SMMU.
 	 */
 	wmb();
-	arm_smmu_atc_inv_master(master);
+	arm_smmu_atc_inv_master(master, 0);

 	atomic_dec(&smmu_domain->nr_ats_masters);
 }
@@ -3131,7 +3145,22 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 static void arm_smmu_mm_invalidate(struct device *dev, int pasid, void *entry,
 				   unsigned long iova, size_t size)
 {
-	/* TODO: Invalidate ATC */
+	int i;
+	struct arm_smmu_cmdq_ent cmd;
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!master->ats_enabled)
+		return;
+
+	arm_smmu_atc_inv_to_cmd(pasid, iova, size, &cmd);
+
+	for (i = 0; i < master->num_sids; i++) {
+		cmd.atc.sid = master->sids[i];
+		arm_smmu_cmdq_batch_add(master->smmu, &cmds, &cmd);
+	}
+
+	arm_smmu_cmdq_batch_submit(master->smmu, &cmds);
 }

 static int arm_smmu_mm_attach(struct device *dev, int pasid, void *entry,
@@ -3168,26 +3197,43 @@ static void arm_smmu_mm_clear(struct device *dev, int pasid, void *entry)
 	 * for this ASID, so we need to do it manually.
 	 */
 	arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
-
-	/* TODO: invalidate ATC */
+	arm_smmu_atc_inv_domain(smmu_domain, pasid, 0, 0);
 }

 static void arm_smmu_mm_detach(struct device *dev, int pasid, void *entry,
 			       bool detach_domain, bool cleared)
 {
 	struct arm_smmu_ctx_desc *cd = entry;
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

-	if (detach_domain) {
+	if (detach_domain)
 		arm_smmu_write_ctx_desc(smmu_domain, pasid, NULL);

-		if (!cleared)
-			/* See comment in arm_smmu_mm_clear() */
-			arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
-	}
+	/*
+	 * If we went through clear(), we've already invalidated, and no new TLB
+	 * entry can have been formed.
+	 */
+	if (cleared)
+		return;
+
+	if (detach_domain) {
+		/* See comment in arm_smmu_mm_clear() */
+		arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
+		arm_smmu_atc_inv_domain(smmu_domain, pasid, 0, 0);
-
-	/* TODO: invalidate ATC */
+	} else if (master->ats_enabled) {
+		/*
+		 * There are more devices bound with this PASID in this domain,
+		 * so we cannot yet clear the PASID entry, and this device could
+		 * create new ATC entries. Invalidate the ATC for the sake of
+		 * it. On unbinding the last device we'll properly invalidate
+		 * all ATCs in the domain. Alternatively, an early detach_dev()
+		 * on this device will also flush the ATC.
+		 */
+		arm_smmu_atc_inv_master(master, pasid);
+	}
 }

 static void *arm_smmu_mm_alloc(struct mm_struct *mm)
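[Editor's note] One implementation detail in this patch worth calling out:
arm_smmu_mm_invalidate() accumulates one ATC_INV command per stream ID with
arm_smmu_cmdq_batch_add() and then issues a single
arm_smmu_cmdq_batch_submit(), paying for one command-queue sync instead of one
per SID. A freestanding sketch of the same accumulate-then-flush pattern (the
types and the BATCH_MAX bound are invented for illustration):

#include <stddef.h>
#include <stdint.h>

struct cmd { uint64_t dword[2]; };

#define BATCH_MAX 64

struct cmd_batch {
	struct cmd cmds[BATCH_MAX];
	size_t num;
};

/* hw_submit() stands in for "write commands to the queue and sync once". */
static void batch_submit(struct cmd_batch *b,
			 void (*hw_submit)(const struct cmd *, size_t))
{
	if (b->num)
		hw_submit(b->cmds, b->num);
	b->num = 0;
}

static void batch_add(struct cmd_batch *b, struct cmd c,
		      void (*hw_submit)(const struct cmd *, size_t))
{
	if (b->num == BATCH_MAX)
		batch_submit(b, hw_submit);	/* flush when full */
	b->cmds[b->num++] = c;
}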
From patchwork Tue Apr 14 17:02:47 2020
From: Jean-Philippe Brucker
Subject: [PATCH v5 19/25] iommu/arm-smmu-v3: Add support for Hardware Translation Table Update
Date: Tue, 14 Apr 2020 19:02:47 +0200
Message-Id: <20200414170252.714402-20-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

If the SMMU supports it and the kernel was built with HTTU support, enable
hardware update of access and dirty flags. This is essential for shared page
tables, to reduce the number of access faults on the fault queue. We can
enable HTTU even if the CPUs don't support it, because the kernel always
checks for the HW dirty bit and updates the PTE flags atomically.
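[Editor's note] The diff below always sets the HA/HD bits in the shared
context descriptor's TCR and filters them out at CD-write time when the SMMU
lacks the corresponding feature. That filtering step, isolated into a
freestanding sketch (bit positions and feature values copied from the diff,
helper name invented):

#include <stdint.h>

#define CTXDESC_CD_0_TCR_HA	(1ULL << 43)
#define CTXDESC_CD_0_TCR_HD	(1ULL << 42)
#define ARM_SMMU_FEAT_HA	(1u << 19)
#define ARM_SMMU_FEAT_HD	(1u << 20)

/*
 * Strip HTTU bits the SMMU can't honour. The CPU-side software fallback
 * (atomic PTE updates) still works, so this only loses an optimization.
 */
static uint64_t filter_httu(uint64_t tcr, uint32_t features)
{
	if (!(features & ARM_SMMU_FEAT_HD))
		tcr &= ~CTXDESC_CD_0_TCR_HD;
	if (!(features & ARM_SMMU_FEAT_HA))
		tcr &= ~CTXDESC_CD_0_TCR_HA;
	return tcr;
}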
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: bump feature bits
---
 drivers/iommu/arm-smmu-v3.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c4bffb14461aa..4ed9df15581af 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -57,6 +57,8 @@
 #define IDR0_ASID16			(1 << 12)
 #define IDR0_ATS			(1 << 10)
 #define IDR0_HYP			(1 << 9)
+#define IDR0_HD				(1 << 7)
+#define IDR0_HA				(1 << 6)
 #define IDR0_BTM			(1 << 5)
 #define IDR0_COHACC			(1 << 4)
 #define IDR0_TTF			GENMASK(3, 2)
@@ -308,6 +310,9 @@
 #define CTXDESC_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
 #define CTXDESC_CD_0_TCR_TBI0		(1ULL << 38)

+#define CTXDESC_CD_0_TCR_HA		(1UL << 43)
+#define CTXDESC_CD_0_TCR_HD		(1UL << 42)
+
 #define CTXDESC_CD_0_AA64		(1UL << 41)
 #define CTXDESC_CD_0_S			(1UL << 44)
 #define CTXDESC_CD_0_R			(1UL << 45)
@@ -659,6 +664,8 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_E2H		(1 << 16)
 #define ARM_SMMU_FEAT_BTM		(1 << 17)
 #define ARM_SMMU_FEAT_SVA		(1 << 18)
+#define ARM_SMMU_FEAT_HA		(1 << 19)
+#define ARM_SMMU_FEAT_HD		(1 << 20)
 	u32				features;

 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -1689,10 +1696,17 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 		 * this substream's traffic */
 	} else { /* (1) and (2) */
+		u64 tcr = cd->tcr;
+
 		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
 		cdptr[2] = 0;
 		cdptr[3] = cpu_to_le64(cd->mair);

+		if (!(smmu->features & ARM_SMMU_FEAT_HD))
+			tcr &= ~CTXDESC_CD_0_TCR_HD;
+		if (!(smmu->features & ARM_SMMU_FEAT_HA))
+			tcr &= ~CTXDESC_CD_0_TCR_HA;
+
 		/*
 		 * STE is live, and the SMMU might read dwords of this CD in any
 		 * order. Ensure that it observes valid values before reading
@@ -1700,7 +1714,7 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 		 */
 		arm_smmu_sync_cd(smmu_domain, ssid, true);

-		val = cd->tcr |
+		val = tcr |
 #ifdef __BIG_ENDIAN
 			CTXDESC_CD_0_ENDI |
 #endif
@@ -1943,10 +1957,12 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
 		return old_cd;
 	}

+	/* HA and HD will be filtered out later if not supported by the SMMU */
 	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
 	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
 	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
 	      FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
+	      CTXDESC_CD_0_TCR_HA | CTXDESC_CD_0_TCR_HD |
 	      CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;

 	switch (PAGE_SIZE) {
@@ -4309,6 +4325,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->features |= ARM_SMMU_FEAT_E2H;
 	}

+	if (reg & (IDR0_HA | IDR0_HD)) {
+		smmu->features |= ARM_SMMU_FEAT_HA;
+		if (reg & IDR0_HD)
+			smmu->features |= ARM_SMMU_FEAT_HD;
+	}
+
 	/*
 	 * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
 	 * will create TLB entries for NH-EL1 world and will miss the
From patchwork Tue Apr 14 17:02:51 2020
From: Jean-Philippe Brucker
Subject: [PATCH v5 23/25] PCI/ATS: Add PRI stubs
Date: Tue, 14 Apr 2020 19:02:51 +0200
Message-Id: <20200414170252.714402-24-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

The SMMUv3 driver, which can be built without CONFIG_PCI, will soon gain
support for PRI. Partially revert commit c6e9aefbf9db ("PCI/ATS: Remove
unused PRI and PASID stubs") to re-introduce the PRI stubs, and avoid adding
more #ifdefs to the SMMU driver.

Acked-by: Bjorn Helgaas
Signed-off-by: Jean-Philippe Brucker
---
 include/linux/pci-ats.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
index f75c307f346de..e9e266df9b37c 100644
--- a/include/linux/pci-ats.h
+++ b/include/linux/pci-ats.h
@@ -28,6 +28,14 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs);
 void pci_disable_pri(struct pci_dev *pdev);
 int pci_reset_pri(struct pci_dev *pdev);
 int pci_prg_resp_pasid_required(struct pci_dev *pdev);
+#else /* CONFIG_PCI_PRI */
+static inline int pci_enable_pri(struct pci_dev *pdev, u32 reqs)
+{ return -ENODEV; }
+static inline void pci_disable_pri(struct pci_dev *pdev) { }
+static inline int pci_reset_pri(struct pci_dev *pdev)
+{ return -ENODEV; }
+static inline int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+{ return 0; }
 #endif /* CONFIG_PCI_PRI */

 #ifdef CONFIG_PCI_PASID
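[Editor's note] The point of stubs like these is that call sites compile
unchanged whether or not CONFIG_PCI_PRI is set. An illustrative caller (not
part of the patch), using the real pci-ats.h entry points shown above:

/*
 * No #ifdef CONFIG_PCI_PRI needed here: with PRI compiled out, the
 * pci_reset_pri() stub returns -ENODEV and the driver degrades gracefully
 * to non-PRI operation.
 */
static int try_enable_pri(struct pci_dev *pdev, u32 max_requests)
{
	int ret = pci_reset_pri(pdev);

	if (ret)
		return ret;
	return pci_enable_pri(pdev, max_requests);
}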
From patchwork Tue Apr 14 17:02:53 2020
From: Jean-Philippe Brucker
Subject: [PATCH v5 25/25] iommu/arm-smmu-v3: Add support for PRI
Date: Tue, 14 Apr 2020 19:02:53 +0200
Message-Id: <20200414170252.714402-26-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

For PCI devices that support it, enable the PRI capability and handle PRI
Page Requests with the generic fault handler. It is enabled on demand by
iommu_sva_device_init().
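[Editor's note] The "generic fault handler" round trip reduces to: fault the
page in on behalf of the device, then answer Success (the device retries the
translation) or Invalid (the device must not retry that address). A
freestanding model of that decision, with a function pointer standing in for
the mm fault path; all names here are illustrative, not the series' API:

#include <stdint.h>

enum pri_reply { PRI_REPLY_SUCCESS, PRI_REPLY_INVALID, PRI_REPLY_FAILURE };

/* fault_in() returns 0 when the mm mapped the page with the asked perms. */
static enum pri_reply service_page_request(uint64_t addr, int wants_write,
					   int (*fault_in)(uint64_t, int))
{
	if (fault_in(addr, wants_write))
		return PRI_REPLY_INVALID;	/* bad address: don't retry */
	return PRI_REPLY_SUCCESS;		/* retry the translation */
}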
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm-smmu-v3.c | 284 +++++++++++++++++++++++++++++-------
 1 file changed, 234 insertions(+), 50 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index a7becf1c5347e..8017700c33c46 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -251,6 +251,7 @@
 #define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
 #define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)

+#define STRTAB_STE_1_PPAR		(1UL << 18)
 #define STRTAB_STE_1_S1STALLD		(1UL << 27)

 #define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
@@ -381,6 +382,9 @@
 #define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
 #define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
 #define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
+#define CMDQ_PRI_1_RESP_FAILURE		0UL
+#define CMDQ_PRI_1_RESP_INVALID		1UL
+#define CMDQ_PRI_1_RESP_SUCCESS		2UL

 #define CMDQ_RESUME_0_SID		GENMASK_ULL(63, 32)
 #define CMDQ_RESUME_0_RESP_TERM		0UL
@@ -453,12 +457,6 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass, "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");

-enum pri_resp {
-	PRI_RESP_DENY = 0,
-	PRI_RESP_FAIL = 1,
-	PRI_RESP_SUCC = 2,
-};
-
 enum arm_smmu_msi_index {
 	EVTQ_MSI_INDEX,
 	GERROR_MSI_INDEX,
@@ -545,7 +543,7 @@ struct arm_smmu_cmdq_ent {
 			u32			sid;
 			u32			ssid;
 			u16			grpid;
-			enum pri_resp		resp;
+			u8			resp;
 		} pri;

 		#define CMDQ_OP_RESUME		0x44
@@ -623,6 +621,7 @@ struct arm_smmu_evtq {
 struct arm_smmu_priq {
 	struct arm_smmu_queue		q;
+	struct iopf_queue		*iopf;
 };

 /* High-level stream table and context descriptor structures */
@@ -756,6 +755,8 @@ struct arm_smmu_master {
 	unsigned int			num_streams;
 	bool				ats_enabled;
 	bool				stall_enabled;
+	bool				pri_supported;
+	bool				prg_resp_needs_ssid;
 	unsigned int			ssid_bits;
 };
@@ -1034,14 +1035,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
-		switch (ent->pri.resp) {
-		case PRI_RESP_DENY:
-		case PRI_RESP_FAIL:
-		case PRI_RESP_SUCC:
-			break;
-		default:
-			return -EINVAL;
-		}
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_RESUME:
@@ -1621,6 +1614,7 @@ static int arm_smmu_page_response(struct device *dev,
 {
 	struct arm_smmu_cmdq_ent cmd = {0};
 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+	bool pasid_valid = resp->flags & IOMMU_PAGE_RESP_PASID_VALID;
 	int sid = master->streams[0].id;

 	if (master->stall_enabled) {
@@ -1638,8 +1632,27 @@ static int arm_smmu_page_response(struct device *dev,
 		default:
 			return -EINVAL;
 		}
+	} else if (master->pri_supported) {
+		cmd.opcode		= CMDQ_OP_PRI_RESP;
+		cmd.substream_valid	= pasid_valid &&
+					  master->prg_resp_needs_ssid;
+		cmd.pri.sid		= sid;
+		cmd.pri.ssid		= resp->pasid;
+		cmd.pri.grpid		= resp->grpid;
+		switch (resp->code) {
+		case IOMMU_PAGE_RESP_FAILURE:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_FAILURE;
+			break;
+		case IOMMU_PAGE_RESP_INVALID:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_INVALID;
+			break;
+		case IOMMU_PAGE_RESP_SUCCESS:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_SUCCESS;
+			break;
+		default:
+			return -EINVAL;
+		}
 	} else {
-		/* TODO: insert PRI response here */
 		return -ENODEV;
 	}
@@ -2236,6 +2249,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
 			 FIELD_PREP(STRTAB_STE_1_STRW, strw));

+		if (master->prg_resp_needs_ssid)
+			dst[1] |= STRTAB_STE_1_PPAR;
+
 		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
 		    !master->stall_enabled)
 			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
@@ -2480,61 +2496,110 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)

 static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
 {
-	u32 sid, ssid;
-	u16 grpid;
-	bool ssv, last;
-
-	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
-	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
-	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
-	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
-	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
-
-	dev_info(smmu->dev, "unexpected PRI request received:\n");
-	dev_info(smmu->dev,
-		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
-		 sid, ssid, grpid, last ? "L" : "",
-		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
-		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
-		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
-		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
-		 evt[1] & PRIQ_1_ADDR_MASK);
-
-	if (last) {
-		struct arm_smmu_cmdq_ent cmd = {
-			.opcode			= CMDQ_OP_PRI_RESP,
-			.substream_valid	= ssv,
-			.pri			= {
-				.sid	= sid,
-				.ssid	= ssid,
-				.grpid	= grpid,
-				.resp	= PRI_RESP_DENY,
-			},
+	u32 sid = FIELD_GET(PRIQ_0_SID, evt[0]);
+
+	bool pasid_valid, last;
+	struct arm_smmu_master *master;
+	struct iommu_fault_event fault_evt = {
+		.fault.type = IOMMU_FAULT_PAGE_REQ,
+		.fault.prm = {
+			.pasid = FIELD_GET(PRIQ_0_SSID, evt[0]),
+			.grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]),
+			.addr = evt[1] & PRIQ_1_ADDR_MASK,
+		},
+	};
+	struct iommu_fault_page_request *pr = &fault_evt.fault.prm;
+
+	pasid_valid = evt[0] & PRIQ_0_SSID_V;
+	last = evt[0] & PRIQ_0_PRG_LAST;
+
+	/* Discard Stop PASID marker, it isn't used */
+	if (!(evt[0] & (PRIQ_0_PERM_READ | PRIQ_0_PERM_WRITE)) && last)
+		return;
+
+	if (last)
+		pr->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+	if (pasid_valid)
+		pr->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+	if (evt[0] & PRIQ_0_PERM_READ)
+		pr->perm |= IOMMU_FAULT_PERM_READ;
+	if (evt[0] & PRIQ_0_PERM_WRITE)
+		pr->perm |= IOMMU_FAULT_PERM_WRITE;
+	if (evt[0] & PRIQ_0_PERM_EXEC)
+		pr->perm |= IOMMU_FAULT_PERM_EXEC;
+	if (evt[0] & PRIQ_0_PERM_PRIV)
+		pr->perm |= IOMMU_FAULT_PERM_PRIV;
+
+	master = arm_smmu_find_master(smmu, sid);
+	if (WARN_ON(!master))
+		return;
+
+	if (iommu_report_device_fault(master->dev, &fault_evt)) {
+		/*
+		 * No handler registered, so subsequent faults won't produce
+		 * better results. Try to disable PRI.
+		 */
+		struct iommu_page_response resp = {
+			.flags = pasid_valid ?
+				 IOMMU_PAGE_RESP_PASID_VALID : 0,
+			.pasid = pr->pasid,
+			.grpid = pr->grpid,
+			.code = IOMMU_PAGE_RESP_FAILURE,
 		};
-		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		dev_warn(master->dev,
+			 "PPR 0x%x:0x%llx 0x%x: nobody cared, disabling PRI\n",
+			 pasid_valid ? pr->pasid : 0, pr->addr, pr->perm);
+		if (last)
+			arm_smmu_page_response(master->dev, NULL, &resp);
 	}
 }
 static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
 {
+	int num_handled = 0;
+	bool overflow = false;
 	struct arm_smmu_device *smmu = dev;
 	struct arm_smmu_queue *q = &smmu->priq.q;
 	struct arm_smmu_ll_queue *llq = &q->llq;
+	size_t queue_size = 1 << llq->max_n_shift;
 	u64 evt[PRIQ_ENT_DWORDS];

+	spin_lock(&q->wq.lock);
 	do {
-		while (!queue_remove_raw(q, evt))
+		while (!queue_remove_raw(q, evt)) {
+			spin_unlock(&q->wq.lock);
 			arm_smmu_handle_ppr(smmu, evt);
+			spin_lock(&q->wq.lock);
+			if (++num_handled == queue_size) {
+				q->batch++;
+				wake_up_all_locked(&q->wq);
+				num_handled = 0;
+			}
+		}

-		if (queue_sync_prod_in(q) == -EOVERFLOW)
+		if (queue_sync_prod_in(q) == -EOVERFLOW) {
 			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+			overflow = true;
+		}
 	} while (!queue_empty(llq));

 	/* Sync our overflow flag, as we believe we're up to speed */
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
 		    Q_IDX(llq, llq->cons);
 	queue_sync_cons_out(q);
+
+	wake_up_all_locked(&q->wq);
+	spin_unlock(&q->wq.lock);
+
+	/*
+	 * On overflow, the SMMU might have discarded the last PPR in a group.
+	 * There is no way to know more about it, so we have to discard all
+	 * partial faults already queued.
+	 */
+	if (overflow)
+		iopf_queue_discard_partial(smmu->priq.iopf);
+
 	return IRQ_HANDLED;
 }
@@ -2569,6 +2634,36 @@ static int arm_smmu_flush_evtq(void *cookie, struct device *dev, int pasid)
 	return ret;
 }

+/*
+ * arm_smmu_flush_priq - wait until all requests currently in the queue have
+ * been consumed.
+ *
+ * See arm_smmu_flush_evtq().
+ */
+static int arm_smmu_flush_priq(void *cookie, struct device *dev, int pasid)
+{
+	int ret;
+	u64 batch;
+	bool overflow = false;
+	struct arm_smmu_device *smmu = cookie;
+	struct arm_smmu_queue *q = &smmu->priq.q;
+
+	spin_lock(&q->wq.lock);
+	if (queue_sync_prod_in(q) == -EOVERFLOW) {
+		dev_err(smmu->dev, "priq overflow detected -- requests lost\n");
+		overflow = true;
+	}
+
+	batch = q->batch;
+	ret = wait_event_interruptible_locked(q->wq, queue_empty(&q->llq) ||
+					      q->batch >= batch + 2);
+	spin_unlock(&q->wq.lock);
+
+	if (overflow)
+		iopf_queue_discard_partial(smmu->priq.iopf);
+	return ret;
+}
+
 static int arm_smmu_device_disable(struct arm_smmu_device *smmu);

 static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
@@ -3270,6 +3365,75 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
 	pci_disable_pasid(pdev);
 }

+static int arm_smmu_init_pri(struct arm_smmu_master *master)
+{
+	int pos;
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return -EINVAL;
+
+	if (!(master->smmu->features & ARM_SMMU_FEAT_PRI))
+		return 0;
+
+	pdev = to_pci_dev(master->dev);
+	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
+	if (!pos)
+		return 0;
+
+	/* If the device supports PASID and PRI, set STE.PPAR */
+	if (master->ssid_bits)
+		master->prg_resp_needs_ssid = pci_prg_resp_pasid_required(pdev);
+
+	master->pri_supported = true;
+	return 0;
+}
+
+static int arm_smmu_enable_pri(struct arm_smmu_master *master)
+{
+	int ret;
+	struct pci_dev *pdev;
+	/*
+	 * TODO: find a good inflight PPR number. We should divide the PRI queue
+	 * by the number of PRI-capable devices, but it's impossible to know
+	 * about future (probed late or hotplugged) devices. So we're at risk of
+	 * dropping PPRs (and leaking pending requests in the FQ).
+	 */
+	size_t max_inflight_pprs = 16;
+
+	if (!master->pri_supported || !master->ats_enabled)
+		return -ENODEV;
+
+	pdev = to_pci_dev(master->dev);
+
+	ret = pci_reset_pri(pdev);
+	if (ret)
+		return ret;
+
+	ret = pci_enable_pri(pdev, max_inflight_pprs);
+	if (ret) {
+		dev_err(master->dev, "cannot enable PRI: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void arm_smmu_disable_pri(struct arm_smmu_master *master)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return;
+
+	pdev = to_pci_dev(master->dev);
+
+	if (!pdev->pri_enabled)
+		return;
+
+	pci_disable_pri(pdev);
+}
+
 static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 {
 	unsigned long flags;
@@ -3705,6 +3869,8 @@ static int arm_smmu_add_device(struct device *dev)
 	    smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
 		master->stall_enabled = true;

+	arm_smmu_init_pri(master);
+
 	ret = iommu_device_link(&smmu->iommu, dev);
 	if (ret)
 		goto err_disable_pasid;
@@ -3741,6 +3907,7 @@ static void arm_smmu_remove_device(struct device *dev)
 	master = dev_iommu_priv_get(dev);
 	smmu = master->smmu;
 	iopf_queue_remove_device(smmu->evtq.iopf, dev);
+	iopf_queue_remove_device(smmu->priq.iopf, dev);
 	WARN_ON(iommu_sva_disable(dev));
 	arm_smmu_detach_dev(master);
 	iommu_group_remove_device(dev);
@@ -3864,7 +4031,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,

 static bool arm_smmu_iopf_supported(struct arm_smmu_master *master)
 {
-	return master->stall_enabled;
+	return master->stall_enabled || master->pri_supported;
 }

 static bool arm_smmu_dev_has_feature(struct device *dev,
@@ -3922,6 +4089,15 @@ static int arm_smmu_dev_enable_sva(struct device *dev)
 		ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev);
 		if (ret)
 			goto err_disable_sva;
+	} else if (master->pri_supported) {
+		ret = iopf_queue_add_device(master->smmu->priq.iopf, dev);
+		if (ret)
+			goto err_disable_sva;
+
+		if (arm_smmu_enable_pri(master)) {
+			iopf_queue_remove_device(master->smmu->priq.iopf, dev);
+			goto err_disable_sva;
+		}
 	}

 	return 0;
@@ -3957,7 +4133,9 @@ static int arm_smmu_dev_disable_feature(struct device *dev,

 	switch (feat) {
 	case IOMMU_DEV_FEAT_SVA:
+		arm_smmu_disable_pri(master);
 		iopf_queue_remove_device(master->smmu->evtq.iopf, dev);
+		iopf_queue_remove_device(master->smmu->priq.iopf, dev);
 		return iommu_sva_disable(dev);
 	default:
 		return -EINVAL;
@@ -4101,6 +4279,11 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
 		return 0;

+	smmu->priq.iopf = iopf_queue_alloc(dev_name(smmu->dev),
+					   arm_smmu_flush_priq, smmu);
+	if (!smmu->priq.iopf)
+		return -ENOMEM;
+
 	return arm_smmu_init_one_queue(smmu, &smmu->priq.q,
 				       ARM_SMMU_PRIQ_PROD,
 				       ARM_SMMU_PRIQ_CONS,
 				       PRIQ_ENT_DWORDS, "priq");
@@ -5078,6 +5261,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev)
 	iommu_device_sysfs_remove(&smmu->iommu);
 	arm_smmu_device_disable(smmu);
 	iopf_queue_free(smmu->evtq.iopf);
+	iopf_queue_free(smmu->priq.iopf);

 	return 0;
 }
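[Editor's note] A closing remark on arm_smmu_flush_priq() above: waiting for
q->batch to advance by two is what makes the flush reliable. The first
increment may belong to a consumer sweep that was already in progress when the
flush began, and that sweep may have sampled the producer pointer before our
requests were queued; the second increment guarantees a complete sweep that
started strictly after the flush did. A userspace model of the same idea using
a pthread condition variable (an illustration only, not kernel code):

#include <pthread.h>
#include <stdint.h>
#include <stdbool.h>

struct queue_model {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	uint64_t batch;		/* bumped after each full consumer sweep */
	bool empty;
};

/* Consumer side: after draining a queue's worth of entries. */
static void sweep_done(struct queue_model *q)
{
	pthread_mutex_lock(&q->lock);
	q->batch++;
	pthread_cond_broadcast(&q->cond);
	pthread_mutex_unlock(&q->lock);
}

/* Flusher side: returns once everything queued before the call is consumed. */
static void flush(struct queue_model *q)
{
	pthread_mutex_lock(&q->lock);
	uint64_t start = q->batch;

	while (!q->empty && q->batch < start + 2)
		pthread_cond_wait(&q->cond, &q->lock);
	pthread_mutex_unlock(&q->lock);
}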