From patchwork Mon Apr 5 09:52:48 2021
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 416362
From: Loic Poulain
To: gregkh@linuxfoundation.org, kuba@kernel.org, davem@davemloft.net
Cc: linux-arm-msm@vger.kernel.org, aleksander@aleksander.es, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bjorn.andersson@linaro.org, manivannan.sadhasivam@linaro.org, Loic Poulain
Subject: [PATCH net-next v9 1/2] net: Add a WWAN subsystem
Date: Mon, 5 Apr 2021 11:52:48 +0200
Message-Id: <1617616369-27305-1-git-send-email-loic.poulain@linaro.org>

This change introduces initial support for a WWAN framework. Given the complexity and heterogeneity of existing WWAN hardware and interfaces, there is no strict definition of what a WWAN device is and how it should be represented. It is often a collection of multiple devices that together provide the global WWAN feature (netdev, tty, chardev, etc).

One usual way to expose modem control and configuration is via high-level protocols such as the well-known AT command protocol, MBIM or QMI. USB modems started to expose them as character devices, and user daemons such as ModemManager learnt to use them.

This initial version adds the concept of a WWAN port, which is a logical pipe to a modem control protocol. The protocols are exposed to user space in raw form via character devices, allowing straightforward support in existing tools (ModemManager, ofono...). The WWAN core takes care of the generic part, including character device management, and relies on port driver operations to receive/submit protocol data.

Since the different devices exposing protocols for the same WWAN hardware do not necessarily know about each other (e.g. two different USB interfaces, PCI/MHI channel devices...) and can be created/removed in different orders, the WWAN core ensures that all WWAN ports contributing to the 'whole' WWAN feature are grouped under the same virtual WWAN device, relying on the provided parent device (e.g. MHI controller, USB device). It's a 'trick' I copied from Johannes's earlier WWAN subsystem proposal.

This initial version is purposely minimalist; it essentially moves the generic part of the previously proposed mhi_wwan_ctrl driver into a common WWAN framework, but the implementation is open and flexible enough to allow extension for further drivers.
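As an illustration only (not part of the patch), here is a minimal sketch of how a port driver is expected to plug into this framework, based on the wwan_create_port()/wwan_port_ops API added below; every my_modem_* name is hypothetical:

/* Illustrative sketch only -- my_modem_* symbols are hypothetical; the real
 * entry points are wwan_create_port()/wwan_port_ops from this patch.
 */
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

struct my_modem {
	struct device *dev;		/* parent shared by all ports of this modem */
	struct wwan_port *at_port;
};

static int my_modem_port_start(struct wwan_port *port)
{
	struct my_modem *modem = wwan_port_get_drvdata(port);

	/* Enable the underlying control channel (bus specific) */
	return modem ? 0 : -ENODEV;
}

static void my_modem_port_stop(struct wwan_port *port)
{
	/* Disable the underlying control channel */
}

static int my_modem_port_tx(struct wwan_port *port, struct sk_buff *skb)
{
	/* Hand the protocol payload (e.g. an AT command) to the hardware.
	 * The driver owns the skb from here; a real driver would queue it
	 * for transfer and free/consume it on completion.
	 */
	dev_kfree_skb_any(skb);
	return 0;
}

static const struct wwan_port_ops my_modem_port_ops = {
	.start = my_modem_port_start,
	.stop = my_modem_port_stop,
	.tx = my_modem_port_tx,
};

static int my_modem_register_at_port(struct my_modem *modem)
{
	/* The port is attached to the wwanX device created (or reused) for
	 * modem->dev and exposed to user space as a character device.
	 */
	modem->at_port = wwan_create_port(modem->dev, WWAN_PORT_AT,
					  &my_modem_port_ops, modem);
	return PTR_ERR_OR_ZERO(modem->at_port);
}

/* On data reception the driver simply forwards the buffer with
 * wwan_port_rx(modem->at_port, skb), and uses wwan_port_txoff()/
 * wwan_port_txon() for TX flow control. Teardown is wwan_remove_port().
 */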
Signed-off-by: Loic Poulain --- v2: not part of the series v3: not part of the series v4: Introduce WWAN framework/subsystem v5: Specify WWAN_CORE module name in Kconfig v6: - Move to unified wwan_core.c file - Make wwan_port a device - Get rid of useless various dev lists (use wwan class) - Get rid of useless wwan dev usage counter and mutex - do not expose wwan_create_device, it's indirectly called via create_port - Increase minor count to the whole available minor range - private_data as wwan_create_port parameter v7: Fix change log (mixed up 1/2 and 2/2) v8: - Move fops implementation in wwan_core (open/read/write/poll/release) - Create wwan_port_ops - Add wwan_port_rx, wwan_port_txoff and wwan_port_txon functions - Fix code comments - skb based TX/RX v9: - Move wwan_port definition to wwan_core.c (private) - Fix checkpatch + cocci(ERR_CAST) issues - Reword commit message drivers/net/Kconfig | 2 + drivers/net/Makefile | 1 + drivers/net/wwan/Kconfig | 23 ++ drivers/net/wwan/Makefile | 7 + drivers/net/wwan/wwan_core.c | 552 +++++++++++++++++++++++++++++++++++++++++++ include/linux/wwan.h | 111 +++++++++ 6 files changed, 696 insertions(+) create mode 100644 drivers/net/wwan/Kconfig create mode 100644 drivers/net/wwan/Makefile create mode 100644 drivers/net/wwan/wwan_core.c create mode 100644 include/linux/wwan.h diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 5895905..74dc8e24 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -502,6 +502,8 @@ source "drivers/net/wan/Kconfig" source "drivers/net/ieee802154/Kconfig" +source "drivers/net/wwan/Kconfig" + config XEN_NETDEV_FRONTEND tristate "Xen network device frontend driver" depends on XEN diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 040e20b..7ffd2d0 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -68,6 +68,7 @@ obj-$(CONFIG_SUNGEM_PHY) += sungem_phy.o obj-$(CONFIG_WAN) += wan/ obj-$(CONFIG_WLAN) += wireless/ obj-$(CONFIG_IEEE802154) += ieee802154/ +obj-$(CONFIG_WWAN) += wwan/ obj-$(CONFIG_VMXNET3) += vmxnet3/ obj-$(CONFIG_XEN_NETDEV_FRONTEND) += xen-netfront.o diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig new file mode 100644 index 0000000..fc3f3a1 --- /dev/null +++ b/drivers/net/wwan/Kconfig @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Wireless WAN device configuration +# + +menuconfig WWAN + bool "Wireless WAN" + help + This section contains Wireless WAN configuration for WWAN framework + and drivers. + +if WWAN + +config WWAN_CORE + tristate "WWAN Driver Core" + help + Say Y here if you want to use the WWAN driver core. This driver + provides a common framework for WWAN drivers. + + To compile this driver as a module, choose M here: the module will be + called wwan. + +endif # WWAN diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile new file mode 100644 index 0000000..934590b --- /dev/null +++ b/drivers/net/wwan/Makefile @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for the Linux WWAN device drivers. 
+# + +obj-$(CONFIG_WWAN_CORE) += wwan.o +wwan-objs += wwan_core.o diff --git a/drivers/net/wwan/wwan_core.c b/drivers/net/wwan/wwan_core.c new file mode 100644 index 0000000..b618b79 --- /dev/null +++ b/drivers/net/wwan/wwan_core.c @@ -0,0 +1,552 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2021, Linaro Ltd */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define WWAN_MAX_MINORS 256 /* 256 minors allowed with register_chrdev() */ + +static DEFINE_MUTEX(wwan_register_lock); /* WWAN device create|remove lock */ +static DEFINE_IDA(minors); /* minors for WWAN port chardevs */ +static DEFINE_IDA(wwan_dev_ids); /* for unique WWAN device IDs */ +static struct class *wwan_class; +static int wwan_major; + +#define to_wwan_dev(d) container_of(d, struct wwan_device, dev) +#define to_wwan_port(d) container_of(d, struct wwan_port, dev) + +/* WWAN port flags */ +#define WWAN_PORT_TX_OFF BIT(0) + +/** + * struct wwan_device - The structure that defines a WWAN device + * + * @id: WWAN device unique ID. + * @dev: Underlying device. + * @port_id: Current available port ID to pick. + */ +struct wwan_device { + unsigned int id; + struct device dev; + atomic_t port_id; +}; + +/** + * struct wwan_port - The structure that defines a WWAN port + * @type: Port type + * @start_count: Port start counter + * @flags: Store port state and capabilities + * @ops: Pointer to WWAN port operations + * @ops_lock: Protect port ops + * @dev: Underlying device + * @rxq: Buffer inbound queue + * @waitqueue: The waitqueue for port fops (read/write/poll) + */ +struct wwan_port { + enum wwan_port_type type; + unsigned int start_count; + unsigned long flags; + const struct wwan_port_ops *ops; + struct mutex ops_lock; /* Serialize ops + protect against removal */ + struct device dev; + struct sk_buff_head rxq; + wait_queue_head_t waitqueue; +}; + +static void wwan_dev_destroy(struct device *dev) +{ + struct wwan_device *wwandev = to_wwan_dev(dev); + + ida_free(&wwan_dev_ids, wwandev->id); + kfree(wwandev); +} + +static const struct device_type wwan_dev_type = { + .name = "wwan_dev", + .release = wwan_dev_destroy, +}; + +static int wwan_dev_parent_match(struct device *dev, const void *parent) +{ + return (dev->type == &wwan_dev_type && dev->parent == parent); +} + +static struct wwan_device *wwan_dev_get_by_parent(struct device *parent) +{ + struct device *dev; + + dev = class_find_device(wwan_class, NULL, parent, wwan_dev_parent_match); + if (!dev) + return ERR_PTR(-ENODEV); + + return to_wwan_dev(dev); +} + +/* This function allocates and registers a new WWAN device OR if a WWAN device + * already exist for the given parent, it gets a reference and return it. + * This function is not exported (for now), it is called indirectly via + * wwan_create_port(). + */ +static struct wwan_device *wwan_create_dev(struct device *parent) +{ + struct wwan_device *wwandev; + int err, id; + + /* The 'find-alloc-register' operation must be protected against + * concurrent execution, a WWAN device is possibly shared between + * multiple callers or concurrently unregistered from wwan_remove_dev(). 
+ */ + mutex_lock(&wwan_register_lock); + + /* If wwandev already exists, return it */ + wwandev = wwan_dev_get_by_parent(parent); + if (!IS_ERR(wwandev)) + goto done_unlock; + + id = ida_alloc(&wwan_dev_ids, GFP_KERNEL); + if (id < 0) + goto done_unlock; + + wwandev = kzalloc(sizeof(*wwandev), GFP_KERNEL); + if (!wwandev) { + ida_free(&wwan_dev_ids, id); + goto done_unlock; + } + + wwandev->dev.parent = parent; + wwandev->dev.class = wwan_class; + wwandev->dev.type = &wwan_dev_type; + wwandev->id = id; + dev_set_name(&wwandev->dev, "wwan%d", wwandev->id); + + err = device_register(&wwandev->dev); + if (err) { + put_device(&wwandev->dev); + wwandev = NULL; + } + +done_unlock: + mutex_unlock(&wwan_register_lock); + + return wwandev; +} + +static int is_wwan_child(struct device *dev, void *data) +{ + return dev->class == wwan_class; +} + +static void wwan_remove_dev(struct wwan_device *wwandev) +{ + int ret; + + /* Prevent concurrent picking from wwan_create_dev */ + mutex_lock(&wwan_register_lock); + + /* WWAN device is created and registered (get+add) along with its first + * child port, and subsequent port registrations only grab a reference + * (get). The WWAN device must then be unregistered (del+put) along with + * its latest port, and reference simply dropped (put) otherwise. + */ + ret = device_for_each_child(&wwandev->dev, NULL, is_wwan_child); + if (!ret) + device_unregister(&wwandev->dev); + else + put_device(&wwandev->dev); + + mutex_unlock(&wwan_register_lock); +} + +/* ------- WWAN port management ------- */ + +static void wwan_port_destroy(struct device *dev) +{ + struct wwan_port *port = to_wwan_port(dev); + + ida_free(&minors, MINOR(port->dev.devt)); + skb_queue_purge(&port->rxq); + mutex_destroy(&port->ops_lock); + kfree(port); +} + +static const struct device_type wwan_port_dev_type = { + .name = "wwan_port", + .release = wwan_port_destroy, +}; + +static int wwan_port_minor_match(struct device *dev, const void *minor) +{ + return (dev->type == &wwan_port_dev_type && + MINOR(dev->devt) == *(unsigned int *)minor); +} + +static struct wwan_port *wwan_port_get_by_minor(unsigned int minor) +{ + struct device *dev; + + dev = class_find_device(wwan_class, NULL, &minor, wwan_port_minor_match); + if (!dev) + return ERR_PTR(-ENODEV); + + return to_wwan_port(dev); +} + +/* Keep aligned with wwan_port_type enum */ +static const char * const wwan_port_type_str[] = { + "AT", + "MBIM", + "QMI", + "QCDM", + "FIREHOSE" +}; + +struct wwan_port *wwan_create_port(struct device *parent, + enum wwan_port_type type, + const struct wwan_port_ops *ops, + void *drvdata) +{ + struct wwan_device *wwandev; + struct wwan_port *port; + int minor, err = -ENOMEM; + + if (type >= WWAN_PORT_MAX || !ops) + return ERR_PTR(-EINVAL); + + /* A port is always a child of a WWAN device, retrieve (allocate or + * pick) the WWAN device based on the provided parent device. 
+ */ + wwandev = wwan_create_dev(parent); + if (IS_ERR(wwandev)) + return ERR_CAST(wwandev); + + /* A port is exposed as character device, get a minor */ + minor = ida_alloc_range(&minors, 0, WWAN_MAX_MINORS - 1, GFP_KERNEL); + if (minor < 0) + goto error_wwandev_remove; + + port = kzalloc(sizeof(*port), GFP_KERNEL); + if (!port) { + ida_free(&minors, minor); + goto error_wwandev_remove; + } + + port->type = type; + port->ops = ops; + mutex_init(&port->ops_lock); + skb_queue_head_init(&port->rxq); + init_waitqueue_head(&port->waitqueue); + + port->dev.parent = &wwandev->dev; + port->dev.class = wwan_class; + port->dev.type = &wwan_port_dev_type; + port->dev.devt = MKDEV(wwan_major, minor); + dev_set_drvdata(&port->dev, drvdata); + + /* create unique name based on wwan device id, port index and type */ + dev_set_name(&port->dev, "wwan%up%u%s", wwandev->id, + atomic_inc_return(&wwandev->port_id), + wwan_port_type_str[port->type]); + + err = device_register(&port->dev); + if (err) + goto error_put_device; + + return port; + +error_put_device: + put_device(&port->dev); +error_wwandev_remove: + wwan_remove_dev(wwandev); + + return ERR_PTR(err); +} +EXPORT_SYMBOL_GPL(wwan_create_port); + +void wwan_remove_port(struct wwan_port *port) +{ + struct wwan_device *wwandev = to_wwan_dev(port->dev.parent); + + mutex_lock(&port->ops_lock); + if (port->start_count) + port->ops->stop(port); + port->ops = NULL; /* Prevent any new port operations (e.g. from fops) */ + mutex_unlock(&port->ops_lock); + + wake_up_interruptible(&port->waitqueue); + + skb_queue_purge(&port->rxq); + dev_set_drvdata(&port->dev, NULL); + device_unregister(&port->dev); + + /* Release related wwan device */ + wwan_remove_dev(wwandev); +} +EXPORT_SYMBOL_GPL(wwan_remove_port); + +void wwan_port_rx(struct wwan_port *port, struct sk_buff *skb) +{ + skb_queue_tail(&port->rxq, skb); + wake_up_interruptible(&port->waitqueue); +} +EXPORT_SYMBOL_GPL(wwan_port_rx); + +void wwan_port_txon(struct wwan_port *port) +{ + clear_bit(WWAN_PORT_TX_OFF, &port->flags); + wake_up_interruptible(&port->waitqueue); +} +EXPORT_SYMBOL_GPL(wwan_port_txon); + +void wwan_port_txoff(struct wwan_port *port) +{ + set_bit(WWAN_PORT_TX_OFF, &port->flags); +} +EXPORT_SYMBOL_GPL(wwan_port_txoff); + +void *wwan_port_get_drvdata(struct wwan_port *port) +{ + return dev_get_drvdata(&port->dev); +} +EXPORT_SYMBOL_GPL(wwan_port_get_drvdata); + +static int wwan_port_op_start(struct wwan_port *port) +{ + int ret = 0; + + mutex_lock(&port->ops_lock); + if (!port->ops) { /* Port got unplugged */ + ret = -ENODEV; + goto out_unlock; + } + + /* If port is already started, don't start again */ + if (!port->start_count) + ret = port->ops->start(port); + + if (!ret) + port->start_count++; + +out_unlock: + mutex_unlock(&port->ops_lock); + + return ret; +} + +static void wwan_port_op_stop(struct wwan_port *port) +{ + mutex_lock(&port->ops_lock); + port->start_count--; + if (port->ops && !port->start_count) + port->ops->stop(port); + mutex_unlock(&port->ops_lock); +} + +static int wwan_port_op_tx(struct wwan_port *port, struct sk_buff *skb) +{ + int ret; + + mutex_lock(&port->ops_lock); + if (!port->ops) { /* Port got unplugged */ + ret = -ENODEV; + goto out_unlock; + } + + ret = port->ops->tx(port, skb); + +out_unlock: + mutex_unlock(&port->ops_lock); + + return ret; +} + +static bool is_read_blocked(struct wwan_port *port) +{ + return skb_queue_empty(&port->rxq) && port->ops; +} + +static bool is_write_blocked(struct wwan_port *port) +{ + return test_bit(WWAN_PORT_TX_OFF, &port->flags) 
&& port->ops; +} + +static int wwan_wait_rx(struct wwan_port *port, bool nonblock) +{ + if (!is_read_blocked(port)) + return 0; + + if (nonblock) + return -EAGAIN; + + if (wait_event_interruptible(port->waitqueue, !is_read_blocked(port))) + return -ERESTARTSYS; + + return 0; +} + +static int wwan_wait_tx(struct wwan_port *port, bool nonblock) +{ + if (!is_write_blocked(port)) + return 0; + + if (nonblock) + return -EAGAIN; + + if (wait_event_interruptible(port->waitqueue, !is_write_blocked(port))) + return -ERESTARTSYS; + + return 0; +} + +static int wwan_port_fops_open(struct inode *inode, struct file *file) +{ + struct wwan_port *port; + int err = 0; + + port = wwan_port_get_by_minor(iminor(inode)); + if (IS_ERR(port)) + return PTR_ERR(port); + + file->private_data = port; + stream_open(inode, file); + + err = wwan_port_op_start(port); + if (err) + put_device(&port->dev); + + return err; +} + +static int wwan_port_fops_release(struct inode *inode, struct file *filp) +{ + struct wwan_port *port = filp->private_data; + + wwan_port_op_stop(port); + put_device(&port->dev); + + return 0; +} + +static ssize_t wwan_port_fops_read(struct file *filp, char __user *buf, + size_t count, loff_t *ppos) +{ + struct wwan_port *port = filp->private_data; + struct sk_buff *skb; + size_t copied; + int ret; + + ret = wwan_wait_rx(port, !!(filp->f_flags & O_NONBLOCK)); + if (ret) + return ret; + + skb = skb_dequeue(&port->rxq); + if (!skb) + return -EIO; + + copied = min_t(size_t, count, skb->len); + if (copy_to_user(buf, skb->data, copied)) { + kfree_skb(skb); + return -EFAULT; + } + skb_pull(skb, copied); + + /* skb is not fully consumed, keep it in the queue */ + if (skb->len) + skb_queue_head(&port->rxq, skb); + else + consume_skb(skb); + + return copied; +} + +static ssize_t wwan_port_fops_write(struct file *filp, const char __user *buf, + size_t count, loff_t *offp) +{ + struct wwan_port *port = filp->private_data; + struct sk_buff *skb; + int ret; + + ret = wwan_wait_tx(port, !!(filp->f_flags & O_NONBLOCK)); + if (ret) + return ret; + + skb = alloc_skb(count, GFP_KERNEL); + if (!skb) + return -ENOMEM; + + if (copy_from_user(skb_put(skb, count), buf, count)) { + kfree_skb(skb); + return -EFAULT; + } + + ret = wwan_port_op_tx(port, skb); + if (ret) { + kfree_skb(skb); + return ret; + } + + return count; +} + +static __poll_t wwan_port_fops_poll(struct file *filp, poll_table *wait) +{ + struct wwan_port *port = filp->private_data; + __poll_t mask = 0; + + poll_wait(filp, &port->waitqueue, wait); + + if (!is_write_blocked(port)) + mask |= EPOLLOUT | EPOLLWRNORM; + if (!is_read_blocked(port)) + mask |= EPOLLIN | EPOLLRDNORM; + + return mask; +} + +static const struct file_operations wwan_port_fops = { + .owner = THIS_MODULE, + .open = wwan_port_fops_open, + .release = wwan_port_fops_release, + .read = wwan_port_fops_read, + .write = wwan_port_fops_write, + .poll = wwan_port_fops_poll, + .llseek = noop_llseek, +}; + +static int __init wwan_init(void) +{ + wwan_class = class_create(THIS_MODULE, "wwan"); + if (IS_ERR(wwan_class)) + return PTR_ERR(wwan_class); + + /* chrdev used for wwan ports */ + wwan_major = register_chrdev(0, "wwan_port", &wwan_port_fops); + if (wwan_major < 0) { + class_destroy(wwan_class); + return wwan_major; + } + + return 0; +} + +static void __exit wwan_exit(void) +{ + unregister_chrdev(wwan_major, "wwan_port"); + class_destroy(wwan_class); +} + +module_init(wwan_init); +module_exit(wwan_exit); + +MODULE_AUTHOR("Loic Poulain "); +MODULE_DESCRIPTION("WWAN core"); +MODULE_LICENSE("GPL 
v2"); diff --git a/include/linux/wwan.h b/include/linux/wwan.h new file mode 100644 index 0000000..f3f3f94 --- /dev/null +++ b/include/linux/wwan.h @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (c) 2021, Linaro Ltd */ + +#ifndef __WWAN_H +#define __WWAN_H + +#include +#include +#include + +/** + * enum wwan_port_type - WWAN port types + * @WWAN_PORT_AT: AT commands + * @WWAN_PORT_MBIM: Mobile Broadband Interface Model control + * @WWAN_PORT_QMI: Qcom modem/MSM interface for modem control + * @WWAN_PORT_QCDM: Qcom Modem diagnostic interface + * @WWAN_PORT_FIREHOSE: XML based command protocol + * @WWAN_PORT_MAX + */ +enum wwan_port_type { + WWAN_PORT_AT, + WWAN_PORT_MBIM, + WWAN_PORT_QMI, + WWAN_PORT_QCDM, + WWAN_PORT_FIREHOSE, + WWAN_PORT_MAX, +}; + +struct wwan_port; + +/** struct wwan_port_ops - The WWAN port operations + * @start: The routine for starting the WWAN port device. + * @stop: The routine for stopping the WWAN port device. + * @tx: The routine that sends WWAN port protocol data to the device. + * + * The wwan_port_ops structure contains a list of low-level operations + * that control a WWAN port device. All functions are mandatory. + */ +struct wwan_port_ops { + int (*start)(struct wwan_port *port); + void (*stop)(struct wwan_port *port); + int (*tx)(struct wwan_port *port, struct sk_buff *skb); +}; + +/** + * wwan_create_port - Add a new WWAN port + * @parent: Device to use as parent and shared by all WWAN ports + * @type: WWAN port type + * @ops: WWAN port operations + * @drvdata: Pointer to caller driver data + * + * Allocate and register a new WWAN port. The port will be automatically exposed + * to user as a character device and attached to the right virtual WWAN device, + * based on the parent pointer. The parent pointer is the device shared by all + * components of a same WWAN modem (e.g. USB dev, PCI dev, MHI controller...). + * + * drvdata will be placed in the WWAN port device driver data and can be + * retrieved with wwan_port_get_drvdata(). + * + * This function must be balanced with a call to wwan_remove_port(). + * + * Returns a valid pointer to wwan_port on success or PTR_ERR on failure + */ +struct wwan_port *wwan_create_port(struct device *parent, + enum wwan_port_type type, + const struct wwan_port_ops *ops, + void *drvdata); + +/** + * wwan_remove_port - Remove a WWAN port + * @port: WWAN port to remove + * + * Remove a previously created port. + */ +void wwan_remove_port(struct wwan_port *port); + +/** + * wwan_port_rx - Receive data from the WWAN port + * @port: WWAN port for which data is received + * @skb: Pointer to the rx buffer + * + * A port driver calls this function upon data reception (MBIM, AT...). + */ +void wwan_port_rx(struct wwan_port *port, struct sk_buff *skb); + +/** + * wwan_port_txoff - Stop TX on WWAN port + * @port: WWAN port for which TX must be stopped + * + * Used for TX flow control, a port driver calls this function to indicate TX + * is temporary unavailable (e.g. due to ring buffer fullness). + */ +void wwan_port_txoff(struct wwan_port *port); + + +/** + * wwan_port_txon - Restart TX on WWAN port + * @port: WWAN port for which TX must be restarted + * + * Used for TX flow control, a port driver calls this function to indicate TX + * is available again. 
+ */
+void wwan_port_txon(struct wwan_port *port);
+
+/**
+ * wwan_port_get_drvdata - Retrieve driver data from a WWAN port
+ * @port: Related WWAN port
+ */
+void *wwan_port_get_drvdata(struct wwan_port *port);
+
+#endif /* __WWAN_H */

From patchwork Mon Apr 5 09:52:49 2021
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 415549
From: Loic Poulain
To: gregkh@linuxfoundation.org, kuba@kernel.org, davem@davemloft.net
Cc: linux-arm-msm@vger.kernel.org, aleksander@aleksander.es, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bjorn.andersson@linaro.org, manivannan.sadhasivam@linaro.org, Loic Poulain
Subject: [PATCH net-next v9 2/2] net: Add Qcom WWAN control driver
Date: Mon, 5 Apr 2021 11:52:49 +0200
Message-Id: <1617616369-27305-2-git-send-email-loic.poulain@linaro.org>
In-Reply-To: <1617616369-27305-1-git-send-email-loic.poulain@linaro.org>
References: <1617616369-27305-1-git-send-email-loic.poulain@linaro.org>

The MHI WWAN control driver allows MHI QCOM-based modems to expose different modem control protocols/ports via the WWAN framework, so that userspace modem tools or daemons (e.g. ModemManager) can control WWAN config and state (APN config, SMS, provider selection...).

A QCOM-based modem can expose one or several of the following protocols:
- AT: Well-known AT command interactive protocol (microcom, minicom...)
- MBIM: Mobile Broadband Interface Model (libmbim, mbimcli)
- QMI: QCOM MSM/Modem Interface (libqmi, qmicli)
- QCDM: QCOM Modem diagnostic interface (libqcdm)
- FIREHOSE: XML-based protocol for modem firmware management (qmi-firmware-update)

Note that this patch is mostly a rework of the earlier MHI UCI attempt, which was a generic interface for accessing the MHI bus from userspace. As suggested, this new version is WWAN specific and is dedicated to exposing only the channels used for controlling a modem, and for which related open-source userspace support exists.

Signed-off-by: Loic Poulain
---
v2: update copyright (2021)
v3: Move driver to dedicated drivers/net/wwan directory
v4: Rework to use wwan framework instead of self cdev management
v5: Fix errors/typos in Kconfig
v6: - Move to new wwan interface, no need for a dedicated call to wwan_dev_create
    - Cleanup code (remove legacy from mhi_uci, unused defines/vars...)
    - Remove useless write_lock mutex
    - Add mhi_wwan_wait_writable and mhi_wwan_wait_dlqueue_lock_irq helpers
    - Rework locking
    - Add MHI_WWAN_TX_FULL flag
    - Add support for NONBLOCK read/write
v7: Fix change log (mixed up 1/2 and 2/2)
v8: - Implement wwan_port_ops instead of fops
    - Remove all obsolete elements (kref, lock, waitqueues)
    - Add tracking of RX buffer budget
    - Use WWAN TX flow control function to stop TX when MHI queue is full
v9: - Add proper locking for rx_budget + rx_refill scheduling
    - Fix cocci errors (use-after-free, ERR_CAST)

drivers/net/wwan/Kconfig | 14 ++ drivers/net/wwan/Makefile | 2 + drivers/net/wwan/mhi_wwan_ctrl.c | 282 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 298 insertions(+) create mode 100644 drivers/net/wwan/mhi_wwan_ctrl.c diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig index fc3f3a1..7ad1920 100644 --- a/drivers/net/wwan/Kconfig +++ b/drivers/net/wwan/Kconfig @@ -20,4 +20,18 @@ config WWAN_CORE To compile this driver as a module, choose M here: the module will be called wwan. +config MHI_WWAN_CTRL + tristate "MHI WWAN control driver for QCOM-based PCIe modems" + select WWAN_CORE + depends on MHI_BUS + help + MHI WWAN CTRL allows QCOM-based PCIe modems to expose different modem + control protocols/ports to userspace, including AT, MBIM, QMI, DIAG + and FIREHOSE. These protocols can be accessed directly from userspace + (e.g. AT commands) or via libraries/tools (e.g. libmbim, libqmi, + libqcdm...). 
+ + To compile this driver as a module, choose M here: the module will be + called mhi_wwan_ctrl. + endif # WWAN diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile index 934590b..556cd90 100644 --- a/drivers/net/wwan/Makefile +++ b/drivers/net/wwan/Makefile @@ -5,3 +5,5 @@ obj-$(CONFIG_WWAN_CORE) += wwan.o wwan-objs += wwan_core.o + +obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o diff --git a/drivers/net/wwan/mhi_wwan_ctrl.c b/drivers/net/wwan/mhi_wwan_ctrl.c new file mode 100644 index 0000000..479bb92 --- /dev/null +++ b/drivers/net/wwan/mhi_wwan_ctrl.c @@ -0,0 +1,282 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2021, Linaro Ltd */ +#include +#include +#include +#include +#include + +/* MHI wwan flags */ +#define MHI_WWAN_DL_CAP BIT(0) +#define MHI_WWAN_UL_CAP BIT(1) +#define MHI_WWAN_RX_REFILL BIT(2) + +#define MHI_WWAN_MAX_MTU 0x8000 + +struct mhi_wwan_dev { + /* Lower level is a mhi dev, upper level is a wwan port */ + struct mhi_device *mhi_dev; + struct wwan_port *wwan_port; + + /* State and capabilities */ + unsigned long flags; + size_t mtu; + + /* Protect against concurrent TX and TX-completion (bh) */ + spinlock_t tx_lock; + + /* Protect RX budget and rx_refill scheduling */ + spinlock_t rx_lock; + struct work_struct rx_refill; + + /* RX budget, initially sets to the size of the MHI RX queue, is used to + * limit the number of allocated and queued packets. It is decremented + * on data queueing and incremented on data release. + */ + unsigned int rx_budget; +}; + +/* Increment RX budget and schedule RX refill if necessary */ +static void mhi_wwan_rx_budget_inc(struct mhi_wwan_dev *mhiwwan) +{ + spin_lock(&mhiwwan->rx_lock); + + mhiwwan->rx_budget++; + + if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags)) + schedule_work(&mhiwwan->rx_refill); + + spin_unlock(&mhiwwan->rx_lock); +} + +/* Decrement RX budget if non-zero and return true on success */ +static bool mhi_wwan_rx_budget_dec(struct mhi_wwan_dev *mhiwwan) +{ + bool ret = false; + + spin_lock(&mhiwwan->rx_lock); + + if (mhiwwan->rx_budget) + mhiwwan->rx_budget--; + + if (mhiwwan->rx_budget && test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags)) + ret = true; + + spin_unlock(&mhiwwan->rx_lock); + + return ret; +} + +static void __mhi_skb_destructor(struct sk_buff *skb) +{ + /* RX buffer has been consumed, increase the allowed budget */ + mhi_wwan_rx_budget_inc(skb_shinfo(skb)->destructor_arg); +} + +static void mhi_wwan_ctrl_refill_work(struct work_struct *work) +{ + struct mhi_wwan_dev *mhiwwan = container_of(work, struct mhi_wwan_dev, rx_refill); + struct mhi_device *mhi_dev = mhiwwan->mhi_dev; + + while (mhi_wwan_rx_budget_dec(mhiwwan)) { + struct sk_buff *skb; + + skb = alloc_skb(mhiwwan->mtu, GFP_KERNEL); + if (!skb) { + mhi_wwan_rx_budget_inc(mhiwwan); + break; + } + + /* To prevent unlimited buffer allocation if nothing consumes + * the RX buffers (passed to WWAN core), track their lifespan + * to not allocate more than allowed budget. 
+ */ + skb->destructor = __mhi_skb_destructor; + skb_shinfo(skb)->destructor_arg = mhiwwan; + + if (mhi_queue_skb(mhi_dev, DMA_FROM_DEVICE, skb, mhiwwan->mtu, MHI_EOT)) { + dev_err(&mhi_dev->dev, "Failed to queue buffer\n"); + kfree_skb(skb); + break; + } + } +} + +static int mhi_wwan_ctrl_start(struct wwan_port *port) +{ + struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port); + int ret; + + /* Start mhi device's channel(s) */ + ret = mhi_prepare_for_transfer(mhiwwan->mhi_dev); + if (ret) + return ret; + + /* Don't allocate more buffers than MHI channel queue size */ + mhiwwan->rx_budget = mhi_get_free_desc_count(mhiwwan->mhi_dev, DMA_FROM_DEVICE); + + /* Add buffers to the MHI inbound queue */ + if (test_bit(MHI_WWAN_DL_CAP, &mhiwwan->flags)) { + set_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags); + mhi_wwan_ctrl_refill_work(&mhiwwan->rx_refill); + } + + return 0; +} + +static void mhi_wwan_ctrl_stop(struct wwan_port *port) +{ + struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port); + + spin_lock(&mhiwwan->rx_lock); + clear_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags); + spin_unlock(&mhiwwan->rx_lock); + + cancel_work_sync(&mhiwwan->rx_refill); + + mhi_unprepare_from_transfer(mhiwwan->mhi_dev); +} + +static int mhi_wwan_ctrl_tx(struct wwan_port *port, struct sk_buff *skb) +{ + struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port); + int ret; + + if (skb->len > mhiwwan->mtu) + return -EMSGSIZE; + + if (!test_bit(MHI_WWAN_UL_CAP, &mhiwwan->flags)) + return -EOPNOTSUPP; + + /* Queue the packet for MHI transfer and check fullness of the queue */ + spin_lock_bh(&mhiwwan->tx_lock); + ret = mhi_queue_skb(mhiwwan->mhi_dev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT); + if (mhi_queue_is_full(mhiwwan->mhi_dev, DMA_TO_DEVICE)) + wwan_port_txoff(port); + spin_unlock_bh(&mhiwwan->tx_lock); + + return ret; +} + +static const struct wwan_port_ops wwan_pops = { + .start = mhi_wwan_ctrl_start, + .stop = mhi_wwan_ctrl_stop, + .tx = mhi_wwan_ctrl_tx, +}; + +static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev); + struct wwan_port *port = mhiwwan->wwan_port; + struct sk_buff *skb = mhi_result->buf_addr; + + dev_dbg(&mhi_dev->dev, "%s: status: %d xfer_len: %zu\n", __func__, + mhi_result->transaction_status, mhi_result->bytes_xferd); + + /* MHI core has done with the buffer, release it */ + consume_skb(skb); + + /* There is likely new slot available in the MHI queue, re-allow TX */ + spin_lock_bh(&mhiwwan->tx_lock); + if (!mhi_queue_is_full(mhiwwan->mhi_dev, DMA_TO_DEVICE)) + wwan_port_txon(port); + spin_unlock_bh(&mhiwwan->tx_lock); +} + +static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev); + struct wwan_port *port = mhiwwan->wwan_port; + struct sk_buff *skb = mhi_result->buf_addr; + + dev_dbg(&mhi_dev->dev, "%s: status: %d receive_len: %zu\n", __func__, + mhi_result->transaction_status, mhi_result->bytes_xferd); + + if (mhi_result->transaction_status && + mhi_result->transaction_status != -EOVERFLOW) { + kfree_skb(skb); + return; + } + + /* MHI core does not update skb->len, do it before forward */ + skb_put(skb, mhi_result->bytes_xferd); + wwan_port_rx(port, skb); + + /* Do not increment rx budget nor refill RX buffers now, wait for the + * buffer to be consumed. Done from __mhi_skb_destructor(). 
+ */
+}
+
+static int mhi_wwan_ctrl_probe(struct mhi_device *mhi_dev,
+			       const struct mhi_device_id *id)
+{
+	struct mhi_controller *cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_wwan_dev *mhiwwan;
+	struct wwan_port *port;
+
+	mhiwwan = kzalloc(sizeof(*mhiwwan), GFP_KERNEL);
+	if (!mhiwwan)
+		return -ENOMEM;
+
+	mhiwwan->mhi_dev = mhi_dev;
+	mhiwwan->mtu = MHI_WWAN_MAX_MTU;
+	INIT_WORK(&mhiwwan->rx_refill, mhi_wwan_ctrl_refill_work);
+	spin_lock_init(&mhiwwan->tx_lock);
+	spin_lock_init(&mhiwwan->rx_lock);
+
+	if (mhi_dev->dl_chan)
+		set_bit(MHI_WWAN_DL_CAP, &mhiwwan->flags);
+	if (mhi_dev->ul_chan)
+		set_bit(MHI_WWAN_UL_CAP, &mhiwwan->flags);
+
+	dev_set_drvdata(&mhi_dev->dev, mhiwwan);
+
+	/* Register as a wwan port, id->driver_data contains wwan port type */
+	port = wwan_create_port(&cntrl->mhi_dev->dev, id->driver_data,
+				&wwan_pops, mhiwwan);
+	if (IS_ERR(port)) {
+		kfree(mhiwwan);
+		return PTR_ERR(port);
+	}
+
+	mhiwwan->wwan_port = port;
+
+	return 0;
+}
+
+static void mhi_wwan_ctrl_remove(struct mhi_device *mhi_dev)
+{
+	struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev);
+
+	wwan_remove_port(mhiwwan->wwan_port);
+	kfree(mhiwwan);
+}
+
+static const struct mhi_device_id mhi_wwan_ctrl_match_table[] = {
+	{ .chan = "DUN", .driver_data = WWAN_PORT_AT },
+	{ .chan = "MBIM", .driver_data = WWAN_PORT_MBIM },
+	{ .chan = "QMI", .driver_data = WWAN_PORT_QMI },
+	{ .chan = "DIAG", .driver_data = WWAN_PORT_QCDM },
+	{ .chan = "FIREHOSE", .driver_data = WWAN_PORT_FIREHOSE },
+	{},
+};
+MODULE_DEVICE_TABLE(mhi, mhi_wwan_ctrl_match_table);
+
+static struct mhi_driver mhi_wwan_ctrl_driver = {
+	.id_table = mhi_wwan_ctrl_match_table,
+	.remove = mhi_wwan_ctrl_remove,
+	.probe = mhi_wwan_ctrl_probe,
+	.ul_xfer_cb = mhi_ul_xfer_cb,
+	.dl_xfer_cb = mhi_dl_xfer_cb,
+	.driver = {
+		.name = "mhi_wwan_ctrl",
+	},
+};
+
+module_mhi_driver(mhi_wwan_ctrl_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI WWAN CTRL Driver");
+MODULE_AUTHOR("Loic Poulain ");
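For completeness, a hedged user-space sketch of how one of the resulting ports could be exercised once this driver has probed; the /dev/wwan0p1AT node name is only an example of the wwan%up%u<TYPE> naming used by wwan_create_port() and depends on the modem and probing order:

/* Illustration only: the device node name below is an example. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char at_cmd[] = "AT+CGMI\r";
	char resp[256];
	struct pollfd pfd;
	ssize_t n;
	int fd;

	fd = open("/dev/wwan0p1AT", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Write the raw AT command to the port... */
	if (write(fd, at_cmd, strlen(at_cmd)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	/* ...and wait for the modem answer forwarded by wwan_port_rx() */
	pfd.fd = fd;
	pfd.events = POLLIN;
	if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN)) {
		n = read(fd, resp, sizeof(resp) - 1);
		if (n > 0) {
			resp[n] = '\0';
			printf("%s\n", resp);
		}
	}

	close(fd);
	return 0;
}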