From patchwork Thu Apr 16 20:03:37 2020
X-Patchwork-Submitter: Eddie James
X-Patchwork-Id: 201918
From: Eddie James
To: linux-aspeed@lists.ozlabs.org
Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
    robh+dt@kernel.org, joel@jms.id.au, andrew@aj.id.au, eajames@linux.ibm.com
Subject: [PATCH v9 3/5] soc: aspeed: xdma: Add user interface
Date: Thu, 16 Apr 2020 15:03:37 -0500
Message-Id: <1587067419-5107-4-git-send-email-eajames@linux.ibm.com>
In-Reply-To: <1587067419-5107-1-git-send-email-eajames@linux.ibm.com>
References: <1587067419-5107-1-git-send-email-eajames@linux.ibm.com>

This commit adds a miscdevice to provide a user interface to the XDMA
engine. The interface provides the write operation to start DMA
operations. The DMA parameters are passed as the data to the write call.
The actual data to transfer is NOT passed through write. Note that both
directions of DMA operation, BMC to host and host to BMC, are initiated
through the write command.

The XDMA driver reserves an area of physical memory for DMA operations,
as the XDMA engine is restricted to accessing certain physical memory
areas on some platforms. This memory forms a pool from which users can
allocate pages for their use with calls to mmap. The space allocated by a
client will be the space used in the DMA operation. For an "upstream"
(BMC to host) operation, the data in the client's area will be
transferred to the host. For a "downstream" (host to BMC) operation, the
host data will be placed in the client's memory area.

Poll is also provided in order to determine when the DMA operation is
complete for non-blocking IO.

Signed-off-by: Eddie James
Reviewed-by: Andrew Jeffery
---
 drivers/soc/aspeed/aspeed-xdma.c | 197 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 197 insertions(+)

diff --git a/drivers/soc/aspeed/aspeed-xdma.c b/drivers/soc/aspeed/aspeed-xdma.c
index c1087f3..41506ed 100644
--- a/drivers/soc/aspeed/aspeed-xdma.c
+++ b/drivers/soc/aspeed/aspeed-xdma.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -230,6 +231,8 @@ struct aspeed_xdma {
         void *cmdq_virt;
         dma_addr_t cmdq_phys;
         struct gen_pool *pool;
+
+        struct miscdevice misc;
 };
 
 struct aspeed_xdma_client {
@@ -555,6 +558,187 @@ static irqreturn_t aspeed_xdma_pcie_irq(int irq, void *arg)
         return IRQ_HANDLED;
 }
 
+static ssize_t aspeed_xdma_write(struct file *file, const char __user *buf,
+                                 size_t len, loff_t *offset)
+{
+        int rc;
+        unsigned int num_cmds;
+        struct aspeed_xdma_op op;
+        struct aspeed_xdma_cmd cmds[2];
+        struct aspeed_xdma_client *client = file->private_data;
+        struct aspeed_xdma *ctx = client->ctx;
+
+        if (len != sizeof(op))
+                return -EINVAL;
+
+        rc = copy_from_user(&op, buf, len);
+        if (rc)
+                return rc;
+
+        if (!op.len || op.len > client->size ||
+            op.direction > ASPEED_XDMA_DIRECTION_UPSTREAM)
+                return -EINVAL;
+
+        num_cmds = ctx->chip->set_cmd(ctx, cmds, &op, client->phys);
+        do {
+                rc = aspeed_xdma_start(ctx, num_cmds, cmds, !!op.direction,
+                                       client);
+                if (!rc)
+                        break;
+
+                if ((file->f_flags & O_NONBLOCK) || rc != -EBUSY)
+                        return rc;
+
+                rc = wait_event_interruptible(ctx->wait,
+                                              !(ctx->current_client ||
+                                                ctx->in_reset));
+        } while (!rc);
+
+        if (rc)
+                return -EINTR;
+
+        if (!(file->f_flags & O_NONBLOCK)) {
+                rc = wait_event_interruptible(ctx->wait, !client->in_progress);
+                if (rc)
+                        return -EINTR;
+
+                if (client->error)
+                        return -EIO;
+        }
+
+        return len;
+}
+
+static __poll_t aspeed_xdma_poll(struct file *file,
+                                 struct poll_table_struct *wait)
+{
+        __poll_t mask = 0;
+        __poll_t req = poll_requested_events(wait);
+        struct aspeed_xdma_client *client = file->private_data;
+        struct aspeed_xdma *ctx = client->ctx;
+
+        if (req & (EPOLLIN | EPOLLRDNORM)) {
+                if (READ_ONCE(client->in_progress))
+                        poll_wait(file, &ctx->wait, wait);
+
+                if (!READ_ONCE(client->in_progress)) {
+                        if (READ_ONCE(client->error))
+                                mask |= EPOLLERR;
+                        else
+                                mask |= EPOLLIN | EPOLLRDNORM;
+                }
+        }
+
+        if (req & (EPOLLOUT | EPOLLWRNORM)) {
+                if (READ_ONCE(ctx->current_client))
+                        poll_wait(file, &ctx->wait, wait);
+
+                if (!READ_ONCE(ctx->current_client))
+                        mask |= EPOLLOUT | EPOLLWRNORM;
+        }
+
+        return mask;
+}
+
+static void aspeed_xdma_vma_close(struct vm_area_struct *vma)
+{
+        int rc;
+        struct aspeed_xdma_client *client = vma->vm_private_data;
+
+        rc = wait_event_interruptible(client->ctx->wait, !client->in_progress);
+        if (rc)
+                return;
+
+        gen_pool_free(client->ctx->pool, (unsigned long)client->virt,
+                      client->size);
+
+        client->virt = NULL;
+        client->phys = 0;
+        client->size = 0;
+}
+
+static const struct vm_operations_struct aspeed_xdma_vm_ops = {
+        .close = aspeed_xdma_vma_close,
+};
+
+static int aspeed_xdma_mmap(struct file *file, struct vm_area_struct *vma)
+{
+        int rc;
+        struct aspeed_xdma_client *client = file->private_data;
+        struct aspeed_xdma *ctx = client->ctx;
+
+        /* restrict file to one mapping */
+        if (client->size)
+                return -EBUSY;
+
+        client->size = vma->vm_end - vma->vm_start;
+        client->virt = gen_pool_dma_alloc(ctx->pool, client->size,
+                                          &client->phys);
+        if (!client->virt) {
+                client->phys = 0;
+                client->size = 0;
+                return -ENOMEM;
+        }
+
+        vma->vm_pgoff = (client->phys - ctx->mem_phys) >> PAGE_SHIFT;
+        vma->vm_ops = &aspeed_xdma_vm_ops;
+        vma->vm_private_data = client;
+        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+        rc = io_remap_pfn_range(vma, vma->vm_start, client->phys >> PAGE_SHIFT,
+                                client->size, vma->vm_page_prot);
+        if (rc) {
+                dev_warn(ctx->dev, "mmap err: v[%08lx] to p[%08x], s[%08x]\n",
+                         vma->vm_start, (u32)client->phys, client->size);
+
+                gen_pool_free(ctx->pool, (unsigned long)client->virt,
+                              client->size);
+
+                client->virt = NULL;
+                client->phys = 0;
+                client->size = 0;
+                return rc;
+        }
+
+        dev_dbg(ctx->dev, "mmap: v[%08lx] to p[%08x], s[%08x]\n",
+                vma->vm_start, (u32)client->phys, client->size);
+
+        return 0;
+}
+
+static int aspeed_xdma_open(struct inode *inode, struct file *file)
+{
+        struct miscdevice *misc = file->private_data;
+        struct aspeed_xdma *ctx = container_of(misc, struct aspeed_xdma, misc);
+        struct aspeed_xdma_client *client = kzalloc(sizeof(*client),
+                                                    GFP_KERNEL);
+
+        if (!client)
+                return -ENOMEM;
+
+        kref_init(&client->kref);
+        client->ctx = ctx;
+        file->private_data = client;
+        return 0;
+}
+
+static int aspeed_xdma_release(struct inode *inode, struct file *file)
+{
+        struct aspeed_xdma_client *client = file->private_data;
+
+        kref_put(&client->kref, aspeed_xdma_client_release);
+        return 0;
+}
+
+static const struct file_operations aspeed_xdma_fops = {
+        .owner = THIS_MODULE,
+        .write = aspeed_xdma_write,
+        .poll = aspeed_xdma_poll,
+        .mmap = aspeed_xdma_mmap,
+        .open = aspeed_xdma_open,
+        .release = aspeed_xdma_release,
+};
+
 static int aspeed_xdma_init_scu(struct aspeed_xdma *ctx, struct device *dev)
 {
         struct regmap *scu = syscon_regmap_lookup_by_phandle(dev->of_node,
@@ -754,6 +938,18 @@ static int aspeed_xdma_probe(struct platform_device *pdev)
 
         aspeed_xdma_init_eng(ctx);
 
+        ctx->misc.minor = MISC_DYNAMIC_MINOR;
+        ctx->misc.fops = &aspeed_xdma_fops;
+        ctx->misc.name = "aspeed-xdma";
+        ctx->misc.parent = dev;
+        rc = misc_register(&ctx->misc);
+        if (rc) {
+                dev_err(dev, "Failed to register xdma miscdevice.\n");
+                gen_pool_free(ctx->pool, (unsigned long)ctx->cmdq_virt,
+                              XDMA_CMDQ_SIZE);
+                goto err;
+        }
+
         /*
          * This interrupt could fire immediately so only request it once the
          * engine and driver are initialized.
@@ -784,6 +980,7 @@ static int aspeed_xdma_remove(struct platform_device *pdev)
 {
         struct aspeed_xdma *ctx = platform_get_drvdata(pdev);
 
+        misc_deregister(&ctx->misc);
         gen_pool_free(ctx->pool, (unsigned long)ctx->cmdq_virt,
                       XDMA_CMDQ_SIZE);
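
For illustration, here is a rough sketch of how a userspace client might
drive the interface added by this patch. It is not part of the patch: the
/dev/aspeed-xdma node name follows from the misc.name set in probe above,
and the layout of struct aspeed_xdma_op (host_addr, len, direction) is
assumed from the series' uapi header; check the field names against
include/uapi/linux/aspeed-xdma.h before relying on them.

/*
 * Illustrative userspace sketch only. Assumes the miscdevice appears as
 * /dev/aspeed-xdma and that struct aspeed_xdma_op from
 * <linux/aspeed-xdma.h> carries host_addr, len and direction fields as
 * used by aspeed_xdma_write() above.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#include <linux/aspeed-xdma.h>

int xdma_upstream(uint64_t host_addr, const void *data, uint32_t len)
{
        int ret = -1;
        int fd = open("/dev/aspeed-xdma", O_RDWR | O_NONBLOCK);

        if (fd < 0)
                return -1;

        /* Grab a buffer from the driver's reserved DMA pool. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, 0);
        if (buf == MAP_FAILED)
                goto out_close;

        /*
         * For an upstream (BMC to host) transfer the payload lives in the
         * mmapped window; write() only carries the DMA parameters.
         */
        memcpy(buf, data, len);

        struct aspeed_xdma_op op = {
                .host_addr = host_addr,
                .len = len,
                .direction = ASPEED_XDMA_DIRECTION_UPSTREAM,
        };
        if (write(fd, &op, sizeof(op)) != sizeof(op))
                goto out_unmap;

        /*
         * Non-blocking mode: poll() reports POLLIN once the transfer has
         * completed, or POLLERR if it failed.
         */
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, -1) == 1 && (pfd.revents & POLLIN))
                ret = 0;

out_unmap:
        munmap(buf, len);
out_close:
        close(fd);
        return ret;
}

Note that write() only carries the transfer parameters; the payload for an
upstream operation must already be in the mmap'd window, which the driver
carves out of its reserved memory pool.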
From patchwork Thu Apr 16 20:03:38 2020
X-Patchwork-Submitter: Eddie James
X-Patchwork-Id: 201917
From: Eddie James
To: linux-aspeed@lists.ozlabs.org
Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
    robh+dt@kernel.org, joel@jms.id.au, andrew@aj.id.au, eajames@linux.ibm.com
Subject: [PATCH v9 4/5] soc: aspeed: xdma: Add reset ioctl
Date: Thu, 16 Apr 2020 15:03:38 -0500
Message-Id: <1587067419-5107-5-git-send-email-eajames@linux.ibm.com>
In-Reply-To: <1587067419-5107-1-git-send-email-eajames@linux.ibm.com>
References: <1587067419-5107-1-git-send-email-eajames@linux.ibm.com>

Users of the XDMA engine need a way to reset it if something goes wrong.
Problems on the host side, or user error such as an incorrect host
address, may result in the DMA operation never completing, with no way to
determine what went wrong. Therefore, add an ioctl to reset the engine so
that users can recover in this situation.

Signed-off-by: Eddie James
Acked-by: Andrew Jeffery
---
 drivers/soc/aspeed/aspeed-xdma.c | 32 ++++++++++++++++++++++++++++++++
 include/uapi/linux/aspeed-xdma.h |  4 ++++
 2 files changed, 36 insertions(+)

diff --git a/drivers/soc/aspeed/aspeed-xdma.c b/drivers/soc/aspeed/aspeed-xdma.c
index 41506ed..94ae04f 100644
--- a/drivers/soc/aspeed/aspeed-xdma.c
+++ b/drivers/soc/aspeed/aspeed-xdma.c
@@ -640,6 +640,37 @@ static __poll_t aspeed_xdma_poll(struct file *file,
         return mask;
 }
 
+static long aspeed_xdma_ioctl(struct file *file, unsigned int cmd,
+                              unsigned long param)
+{
+        unsigned long flags;
+        struct aspeed_xdma_client *client = file->private_data;
+        struct aspeed_xdma *ctx = client->ctx;
+
+        switch (cmd) {
+        case ASPEED_XDMA_IOCTL_RESET:
+                spin_lock_irqsave(&ctx->engine_lock, flags);
+                if (ctx->in_reset) {
+                        spin_unlock_irqrestore(&ctx->engine_lock, flags);
+                        return 0;
+                }
+
+                ctx->in_reset = true;
+                spin_unlock_irqrestore(&ctx->engine_lock, flags);
+
+                if (READ_ONCE(ctx->current_client))
+                        dev_warn(ctx->dev,
+                                 "User reset with transfer in progress.\n");
+
+                aspeed_xdma_reset(ctx);
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        return 0;
+}
+
 static void aspeed_xdma_vma_close(struct vm_area_struct *vma)
 {
         int rc;
@@ -734,6 +765,7 @@ static int aspeed_xdma_release(struct inode *inode, struct file *file)
         .owner = THIS_MODULE,
         .write = aspeed_xdma_write,
         .poll = aspeed_xdma_poll,
+        .unlocked_ioctl = aspeed_xdma_ioctl,
         .mmap = aspeed_xdma_mmap,
         .open = aspeed_xdma_open,
         .release = aspeed_xdma_release,
diff --git a/include/uapi/linux/aspeed-xdma.h b/include/uapi/linux/aspeed-xdma.h
index 2efaa60..3a3646f 100644
--- a/include/uapi/linux/aspeed-xdma.h
+++ b/include/uapi/linux/aspeed-xdma.h
@@ -4,8 +4,12 @@
 #ifndef _UAPI_LINUX_ASPEED_XDMA_H_
 #define _UAPI_LINUX_ASPEED_XDMA_H_
 
+#include
 #include
 
+#define __ASPEED_XDMA_IOCTL_MAGIC 0xb7
+#define ASPEED_XDMA_IOCTL_RESET _IO(__ASPEED_XDMA_IOCTL_MAGIC, 0)
+
 /*
  * aspeed_xdma_direction
  *
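
As a companion to the sketch after patch 3/5, a minimal example of
recovering with the new ioctl might look like the following. It is not
part of the patch: the /dev/aspeed-xdma node name is an assumption, while
ASPEED_XDMA_IOCTL_RESET is taken from the uapi header extended above.

/*
 * Illustrative userspace sketch only: wait briefly for a transfer to
 * finish and reset the engine if it never completes. The fd is assumed
 * to be an open /dev/aspeed-xdma descriptor with a transfer in flight.
 */
#include <poll.h>
#include <sys/ioctl.h>

#include <linux/aspeed-xdma.h>

int xdma_wait_or_reset(int fd, int timeout_ms)
{
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int rc = poll(&pfd, 1, timeout_ms);

        /* Transfer completed successfully. */
        if (rc == 1 && (pfd.revents & POLLIN))
                return 0;

        /* Timed out or errored: ask the driver to reset the engine. */
        return ioctl(fd, ASPEED_XDMA_IOCTL_RESET);
}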