From patchwork Thu Feb 4 18:17:37 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 376321
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: netdev@vger.kernel.org, bjorn@mork.no, dcbw@redhat.com, carl.yin@quectel.com, mpearson@lenovo.com, cchen50@lenovo.com, jwjiang@lenovo.com, ivan.zhang@quectel.com, naveen.kumar@quectel.com, Loic Poulain
Subject: [PATCH net-next v3 1/5] net: mhi: Add protocol support
Date: Thu, 4 Feb 2021 19:17:37 +0100
Message-Id: <1612462661-23045-2-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
References: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

MHI can transport different protocols; some are handled at an
upper level, like IP and QMAP (rmnet/netlink), but others need to be
handled inside the MHI net driver itself, like MBIM. This change adds
support for registering protocol rx and tx_fixup callbacks, which can be
used to decode/encode the targeted protocol.

Signed-off-by: Loic Poulain
---
 drivers/net/mhi_net.c | 69 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 57 insertions(+), 12 deletions(-)

-- 
2.7.4

diff --git a/drivers/net/mhi_net.c b/drivers/net/mhi_net.c
index 8800991..b92c2e1 100644
--- a/drivers/net/mhi_net.c
+++ b/drivers/net/mhi_net.c
@@ -34,11 +34,24 @@ struct mhi_net_dev {
 	struct net_device *ndev;
 	struct sk_buff *skbagg_head;
 	struct sk_buff *skbagg_tail;
+	const struct mhi_net_proto *proto;
+	void *proto_data;
 	struct delayed_work rx_refill;
 	struct mhi_net_stats stats;
 	u32 rx_queue_sz;
 };
 
+struct mhi_net_proto {
+	int (*init)(struct mhi_net_dev *mhi_netdev);
+	struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
+	void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
+};
+
+struct mhi_device_info {
+	const char *netname;
+	const struct mhi_net_proto *proto;
+};
+
 static int mhi_ndo_open(struct net_device *ndev)
 {
 	struct mhi_net_dev *mhi_netdev = netdev_priv(ndev);
@@ -68,26 +81,35 @@ static int mhi_ndo_stop(struct net_device *ndev)
 static int mhi_ndo_xmit(struct sk_buff *skb, struct net_device *ndev)
 {
 	struct mhi_net_dev *mhi_netdev = netdev_priv(ndev);
+	const struct mhi_net_proto *proto = mhi_netdev->proto;
 	struct mhi_device *mdev = mhi_netdev->mdev;
 	int err;
 
+	if (proto && proto->tx_fixup) {
+		skb = proto->tx_fixup(mhi_netdev, skb);
+		if (unlikely(!skb))
+			goto exit_drop;
+	}
+
 	err = mhi_queue_skb(mdev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
 	if (unlikely(err)) {
 		net_err_ratelimited("%s: Failed to queue TX buf (%d)\n",
 				    ndev->name, err);
-
-		u64_stats_update_begin(&mhi_netdev->stats.tx_syncp);
-		u64_stats_inc(&mhi_netdev->stats.tx_dropped);
-		u64_stats_update_end(&mhi_netdev->stats.tx_syncp);
-
-		/* drop the packet */
 		dev_kfree_skb_any(skb);
+		goto exit_drop;
 	}
 
 	if (mhi_queue_is_full(mdev, DMA_TO_DEVICE))
 		netif_stop_queue(ndev);
 
 	return NETDEV_TX_OK;
+
+exit_drop:
+	u64_stats_update_begin(&mhi_netdev->stats.tx_syncp);
+	u64_stats_inc(&mhi_netdev->stats.tx_dropped);
+	u64_stats_update_end(&mhi_netdev->stats.tx_syncp);
+
+	return NETDEV_TX_OK;
 }
 
 static void mhi_ndo_get_stats64(struct net_device *ndev,
@@ -164,6 +186,7 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
 				struct mhi_result *mhi_res)
 {
 	struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev);
+	const struct mhi_net_proto *proto = mhi_netdev->proto;
 	struct sk_buff *skb = mhi_res->buf_addr;
 	int free_desc_count;
 
@@ -220,7 +243,10 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
 			break;
 		}
 
-		netif_rx(skb);
+		if (proto && proto->rx)
+			proto->rx(mhi_netdev, skb);
+		else
+			netif_rx(skb);
 	}
 
 	/* Refill if RX buffers queue becomes low */
@@ -302,14 +328,14 @@ static struct device_type wwan_type = {
 static int mhi_net_probe(struct mhi_device *mhi_dev,
 			 const struct mhi_device_id *id)
 {
-	const char *netname = (char *)id->driver_data;
+	const struct mhi_device_info *info = (struct mhi_device_info *)id->driver_data;
 	struct device *dev = &mhi_dev->dev;
 	struct mhi_net_dev *mhi_netdev;
 	struct net_device *ndev;
 	int err;
 
-	ndev = alloc_netdev(sizeof(*mhi_netdev), netname, NET_NAME_PREDICTABLE,
-			    mhi_net_setup);
+	ndev = alloc_netdev(sizeof(*mhi_netdev), info->netname,
+			    NET_NAME_PREDICTABLE, mhi_net_setup);
 	if (!ndev)
 		return -ENOMEM;
 
@@ -318,6 +344,7 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
 	mhi_netdev->ndev = ndev;
 	mhi_netdev->mdev = mhi_dev;
 	mhi_netdev->skbagg_head = NULL;
+	mhi_netdev->proto = info->proto;
 	SET_NETDEV_DEV(ndev, &mhi_dev->dev);
 	SET_NETDEV_DEVTYPE(ndev, &wwan_type);
 
@@ -337,8 +364,16 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
 	if (err)
 		goto out_err;
 
+	if (mhi_netdev->proto) {
+		err = mhi_netdev->proto->init(mhi_netdev);
+		if (err)
+			goto out_err_proto;
+	}
+
 	return 0;
 
+out_err_proto:
+	unregister_netdev(ndev);
 out_err:
 	free_netdev(ndev);
 	return err;
@@ -358,9 +393,19 @@ static void mhi_net_remove(struct mhi_device *mhi_dev)
 	free_netdev(mhi_netdev->ndev);
 }
 
+static const struct mhi_device_info mhi_hwip0 = {
+	.netname = "mhi_hwip%d",
+};
+
+static const struct mhi_device_info mhi_swip0 = {
+	.netname = "mhi_swip%d",
+};
+
 static const struct mhi_device_id mhi_net_id_table[] = {
-	{ .chan = "IP_HW0", .driver_data = (kernel_ulong_t)"mhi_hwip%d" },
-	{ .chan = "IP_SW0", .driver_data = (kernel_ulong_t)"mhi_swip%d" },
+	/* Hardware accelerated data PATH (to modem IPA), protocol agnostic */
+	{ .chan = "IP_HW0", .driver_data = (kernel_ulong_t)&mhi_hwip0 },
+	/* Software data PATH (to modem CPU) */
+	{ .chan = "IP_SW0", .driver_data = (kernel_ulong_t)&mhi_swip0 },
 	{}
 };
 MODULE_DEVICE_TABLE(mhi, mhi_net_id_table);
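For illustration only (this sketch is not part of the submitted patch; the dummy_* and mhi_swip0_dummy names are hypothetical), a protocol would plug into the new hooks roughly like this, written as it would appear inside mhi_net.c where the structures above are visible:

static int dummy_init(struct mhi_net_dev *mhi_netdev)
{
	/* Allocate protocol state here if needed and park it in
	 * mhi_netdev->proto_data.
	 */
	return 0;
}

static struct sk_buff *dummy_tx_fixup(struct mhi_net_dev *mhi_netdev,
				      struct sk_buff *skb)
{
	/* Encapsulate the packet here; returning NULL makes the core drop
	 * it and account it as tx_dropped via the new exit_drop path.
	 */
	return skb;
}

static void dummy_rx(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb)
{
	/* Decapsulate here, then hand the inner packet(s) to the stack */
	netif_rx(skb);
}

static const struct mhi_net_proto proto_dummy = {
	.init     = dummy_init,
	.tx_fixup = dummy_tx_fixup,
	.rx       = dummy_rx,
};

/* A channel binding would then reference the protocol from the id table
 * through struct mhi_device_info, e.g. a hypothetical entry:
 */
static const struct mhi_device_info mhi_swip0_dummy = {
	.netname = "mhi_dummy%d",
	.proto   = &proto_dummy,
};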
From patchwork Thu Feb 4 18:17:38 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 376319
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: netdev@vger.kernel.org, bjorn@mork.no, dcbw@redhat.com, carl.yin@quectel.com, mpearson@lenovo.com, cchen50@lenovo.com, jwjiang@lenovo.com, ivan.zhang@quectel.com, naveen.kumar@quectel.com, Loic Poulain
Subject: [PATCH net-next v3 2/5] net: mhi: Add dedicated folder
Date: Thu, 4 Feb 2021 19:17:38 +0100
Message-Id: <1612462661-23045-3-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
References: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

Create a dedicated mhi directory for mhi-net, as mhi-net is
going to be split into differente files (for additional protocol support). Signed-off-by: Loic Poulain --- drivers/net/Makefile | 2 +- drivers/net/mhi/Makefile | 3 + drivers/net/mhi/net.c | 429 +++++++++++++++++++++++++++++++++++++++++++++++ drivers/net/mhi_net.c | 429 ----------------------------------------------- 4 files changed, 433 insertions(+), 430 deletions(-) create mode 100644 drivers/net/mhi/Makefile create mode 100644 drivers/net/mhi/net.c delete mode 100644 drivers/net/mhi_net.c -- 2.7.4 diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 36e2e41..f4990ff 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -36,7 +36,7 @@ obj-$(CONFIG_GTP) += gtp.o obj-$(CONFIG_NLMON) += nlmon.o obj-$(CONFIG_NET_VRF) += vrf.o obj-$(CONFIG_VSOCKMON) += vsockmon.o -obj-$(CONFIG_MHI_NET) += mhi_net.o +obj-$(CONFIG_MHI_NET) += mhi/ # # Networking Drivers diff --git a/drivers/net/mhi/Makefile b/drivers/net/mhi/Makefile new file mode 100644 index 0000000..0acf989 --- /dev/null +++ b/drivers/net/mhi/Makefile @@ -0,0 +1,3 @@ +obj-$(CONFIG_MHI_NET) += mhi_net.o + +mhi_net-y := net.o diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c new file mode 100644 index 0000000..b92c2e1 --- /dev/null +++ b/drivers/net/mhi/net.c @@ -0,0 +1,429 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* MHI Network driver - Network over MHI bus + * + * Copyright (C) 2020 Linaro Ltd + */ + +#include +#include +#include +#include +#include +#include +#include + +#define MHI_NET_MIN_MTU ETH_MIN_MTU +#define MHI_NET_MAX_MTU 0xffff +#define MHI_NET_DEFAULT_MTU 0x4000 + +struct mhi_net_stats { + u64_stats_t rx_packets; + u64_stats_t rx_bytes; + u64_stats_t rx_errors; + u64_stats_t rx_dropped; + u64_stats_t tx_packets; + u64_stats_t tx_bytes; + u64_stats_t tx_errors; + u64_stats_t tx_dropped; + struct u64_stats_sync tx_syncp; + struct u64_stats_sync rx_syncp; +}; + +struct mhi_net_dev { + struct mhi_device *mdev; + struct net_device *ndev; + struct sk_buff *skbagg_head; + struct sk_buff *skbagg_tail; + const struct mhi_net_proto *proto; + void *proto_data; + struct delayed_work rx_refill; + struct mhi_net_stats stats; + u32 rx_queue_sz; +}; + +struct mhi_net_proto { + int (*init)(struct mhi_net_dev *mhi_netdev); + struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb); + void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb); +}; + +struct mhi_device_info { + const char *netname; + const struct mhi_net_proto *proto; +}; + +static int mhi_ndo_open(struct net_device *ndev) +{ + struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); + + /* Feed the rx buffer pool */ + schedule_delayed_work(&mhi_netdev->rx_refill, 0); + + /* Carrier is established via out-of-band channel (e.g. 
qmi) */ + netif_carrier_on(ndev); + + netif_start_queue(ndev); + + return 0; +} + +static int mhi_ndo_stop(struct net_device *ndev) +{ + struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); + + netif_stop_queue(ndev); + netif_carrier_off(ndev); + cancel_delayed_work_sync(&mhi_netdev->rx_refill); + + return 0; +} + +static int mhi_ndo_xmit(struct sk_buff *skb, struct net_device *ndev) +{ + struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); + const struct mhi_net_proto *proto = mhi_netdev->proto; + struct mhi_device *mdev = mhi_netdev->mdev; + int err; + + if (proto && proto->tx_fixup) { + skb = proto->tx_fixup(mhi_netdev, skb); + if (unlikely(!skb)) + goto exit_drop; + } + + err = mhi_queue_skb(mdev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT); + if (unlikely(err)) { + net_err_ratelimited("%s: Failed to queue TX buf (%d)\n", + ndev->name, err); + dev_kfree_skb_any(skb); + goto exit_drop; + } + + if (mhi_queue_is_full(mdev, DMA_TO_DEVICE)) + netif_stop_queue(ndev); + + return NETDEV_TX_OK; + +exit_drop: + u64_stats_update_begin(&mhi_netdev->stats.tx_syncp); + u64_stats_inc(&mhi_netdev->stats.tx_dropped); + u64_stats_update_end(&mhi_netdev->stats.tx_syncp); + + return NETDEV_TX_OK; +} + +static void mhi_ndo_get_stats64(struct net_device *ndev, + struct rtnl_link_stats64 *stats) +{ + struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); + unsigned int start; + + do { + start = u64_stats_fetch_begin_irq(&mhi_netdev->stats.rx_syncp); + stats->rx_packets = u64_stats_read(&mhi_netdev->stats.rx_packets); + stats->rx_bytes = u64_stats_read(&mhi_netdev->stats.rx_bytes); + stats->rx_errors = u64_stats_read(&mhi_netdev->stats.rx_errors); + stats->rx_dropped = u64_stats_read(&mhi_netdev->stats.rx_dropped); + } while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.rx_syncp, start)); + + do { + start = u64_stats_fetch_begin_irq(&mhi_netdev->stats.tx_syncp); + stats->tx_packets = u64_stats_read(&mhi_netdev->stats.tx_packets); + stats->tx_bytes = u64_stats_read(&mhi_netdev->stats.tx_bytes); + stats->tx_errors = u64_stats_read(&mhi_netdev->stats.tx_errors); + stats->tx_dropped = u64_stats_read(&mhi_netdev->stats.tx_dropped); + } while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.tx_syncp, start)); +} + +static const struct net_device_ops mhi_netdev_ops = { + .ndo_open = mhi_ndo_open, + .ndo_stop = mhi_ndo_stop, + .ndo_start_xmit = mhi_ndo_xmit, + .ndo_get_stats64 = mhi_ndo_get_stats64, +}; + +static void mhi_net_setup(struct net_device *ndev) +{ + ndev->header_ops = NULL; /* No header */ + ndev->type = ARPHRD_RAWIP; + ndev->hard_header_len = 0; + ndev->addr_len = 0; + ndev->flags = IFF_POINTOPOINT | IFF_NOARP; + ndev->netdev_ops = &mhi_netdev_ops; + ndev->mtu = MHI_NET_DEFAULT_MTU; + ndev->min_mtu = MHI_NET_MIN_MTU; + ndev->max_mtu = MHI_NET_MAX_MTU; + ndev->tx_queue_len = 1000; +} + +static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev, + struct sk_buff *skb) +{ + struct sk_buff *head = mhi_netdev->skbagg_head; + struct sk_buff *tail = mhi_netdev->skbagg_tail; + + /* This is non-paged skb chaining using frag_list */ + if (!head) { + mhi_netdev->skbagg_head = skb; + return skb; + } + + if (!skb_shinfo(head)->frag_list) + skb_shinfo(head)->frag_list = skb; + else + tail->next = skb; + + head->len += skb->len; + head->data_len += skb->len; + head->truesize += skb->truesize; + + mhi_netdev->skbagg_tail = skb; + + return mhi_netdev->skbagg_head; +} + +static void mhi_net_dl_callback(struct mhi_device *mhi_dev, + struct mhi_result *mhi_res) +{ + struct mhi_net_dev *mhi_netdev = 
dev_get_drvdata(&mhi_dev->dev); + const struct mhi_net_proto *proto = mhi_netdev->proto; + struct sk_buff *skb = mhi_res->buf_addr; + int free_desc_count; + + free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE); + + if (unlikely(mhi_res->transaction_status)) { + switch (mhi_res->transaction_status) { + case -EOVERFLOW: + /* Packet can not fit in one MHI buffer and has been + * split over multiple MHI transfers, do re-aggregation. + * That usually means the device side MTU is larger than + * the host side MTU/MRU. Since this is not optimal, + * print a warning (once). + */ + netdev_warn_once(mhi_netdev->ndev, + "Fragmented packets received, fix MTU?\n"); + skb_put(skb, mhi_res->bytes_xferd); + mhi_net_skb_agg(mhi_netdev, skb); + break; + case -ENOTCONN: + /* MHI layer stopping/resetting the DL channel */ + dev_kfree_skb_any(skb); + return; + default: + /* Unknown error, simply drop */ + dev_kfree_skb_any(skb); + u64_stats_update_begin(&mhi_netdev->stats.rx_syncp); + u64_stats_inc(&mhi_netdev->stats.rx_errors); + u64_stats_update_end(&mhi_netdev->stats.rx_syncp); + } + } else { + skb_put(skb, mhi_res->bytes_xferd); + + if (mhi_netdev->skbagg_head) { + /* Aggregate the final fragment */ + skb = mhi_net_skb_agg(mhi_netdev, skb); + mhi_netdev->skbagg_head = NULL; + } + + u64_stats_update_begin(&mhi_netdev->stats.rx_syncp); + u64_stats_inc(&mhi_netdev->stats.rx_packets); + u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len); + u64_stats_update_end(&mhi_netdev->stats.rx_syncp); + + switch (skb->data[0] & 0xf0) { + case 0x40: + skb->protocol = htons(ETH_P_IP); + break; + case 0x60: + skb->protocol = htons(ETH_P_IPV6); + break; + default: + skb->protocol = htons(ETH_P_MAP); + break; + } + + if (proto && proto->rx) + proto->rx(mhi_netdev, skb); + else + netif_rx(skb); + } + + /* Refill if RX buffers queue becomes low */ + if (free_desc_count >= mhi_netdev->rx_queue_sz / 2) + schedule_delayed_work(&mhi_netdev->rx_refill, 0); +} + +static void mhi_net_ul_callback(struct mhi_device *mhi_dev, + struct mhi_result *mhi_res) +{ + struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev); + struct net_device *ndev = mhi_netdev->ndev; + struct mhi_device *mdev = mhi_netdev->mdev; + struct sk_buff *skb = mhi_res->buf_addr; + + /* Hardware has consumed the buffer, so free the skb (which is not + * freed by the MHI stack) and perform accounting. 
+ */ + dev_consume_skb_any(skb); + + u64_stats_update_begin(&mhi_netdev->stats.tx_syncp); + if (unlikely(mhi_res->transaction_status)) { + + /* MHI layer stopping/resetting the UL channel */ + if (mhi_res->transaction_status == -ENOTCONN) { + u64_stats_update_end(&mhi_netdev->stats.tx_syncp); + return; + } + + u64_stats_inc(&mhi_netdev->stats.tx_errors); + } else { + u64_stats_inc(&mhi_netdev->stats.tx_packets); + u64_stats_add(&mhi_netdev->stats.tx_bytes, mhi_res->bytes_xferd); + } + u64_stats_update_end(&mhi_netdev->stats.tx_syncp); + + if (netif_queue_stopped(ndev) && !mhi_queue_is_full(mdev, DMA_TO_DEVICE)) + netif_wake_queue(ndev); +} + +static void mhi_net_rx_refill_work(struct work_struct *work) +{ + struct mhi_net_dev *mhi_netdev = container_of(work, struct mhi_net_dev, + rx_refill.work); + struct net_device *ndev = mhi_netdev->ndev; + struct mhi_device *mdev = mhi_netdev->mdev; + int size = READ_ONCE(ndev->mtu); + struct sk_buff *skb; + int err; + + while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) { + skb = netdev_alloc_skb(ndev, size); + if (unlikely(!skb)) + break; + + err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb, size, MHI_EOT); + if (unlikely(err)) { + net_err_ratelimited("%s: Failed to queue RX buf (%d)\n", + ndev->name, err); + kfree_skb(skb); + break; + } + + /* Do not hog the CPU if rx buffers are consumed faster than + * queued (unlikely). + */ + cond_resched(); + } + + /* If we're still starved of rx buffers, reschedule later */ + if (mhi_get_free_desc_count(mdev, DMA_FROM_DEVICE) == mhi_netdev->rx_queue_sz) + schedule_delayed_work(&mhi_netdev->rx_refill, HZ / 2); +} + +static struct device_type wwan_type = { + .name = "wwan", +}; + +static int mhi_net_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + const struct mhi_device_info *info = (struct mhi_device_info *)id->driver_data; + struct device *dev = &mhi_dev->dev; + struct mhi_net_dev *mhi_netdev; + struct net_device *ndev; + int err; + + ndev = alloc_netdev(sizeof(*mhi_netdev), info->netname, + NET_NAME_PREDICTABLE, mhi_net_setup); + if (!ndev) + return -ENOMEM; + + mhi_netdev = netdev_priv(ndev); + dev_set_drvdata(dev, mhi_netdev); + mhi_netdev->ndev = ndev; + mhi_netdev->mdev = mhi_dev; + mhi_netdev->skbagg_head = NULL; + mhi_netdev->proto = info->proto; + SET_NETDEV_DEV(ndev, &mhi_dev->dev); + SET_NETDEV_DEVTYPE(ndev, &wwan_type); + + INIT_DELAYED_WORK(&mhi_netdev->rx_refill, mhi_net_rx_refill_work); + u64_stats_init(&mhi_netdev->stats.rx_syncp); + u64_stats_init(&mhi_netdev->stats.tx_syncp); + + /* Start MHI channels */ + err = mhi_prepare_for_transfer(mhi_dev); + if (err) + goto out_err; + + /* Number of transfer descriptors determines size of the queue */ + mhi_netdev->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE); + + err = register_netdev(ndev); + if (err) + goto out_err; + + if (mhi_netdev->proto) { + err = mhi_netdev->proto->init(mhi_netdev); + if (err) + goto out_err_proto; + } + + return 0; + +out_err_proto: + unregister_netdev(ndev); +out_err: + free_netdev(ndev); + return err; +} + +static void mhi_net_remove(struct mhi_device *mhi_dev) +{ + struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev); + + unregister_netdev(mhi_netdev->ndev); + + mhi_unprepare_from_transfer(mhi_netdev->mdev); + + if (mhi_netdev->skbagg_head) + kfree_skb(mhi_netdev->skbagg_head); + + free_netdev(mhi_netdev->ndev); +} + +static const struct mhi_device_info mhi_hwip0 = { + .netname = "mhi_hwip%d", +}; + +static const struct mhi_device_info mhi_swip0 = { + .netname = 
"mhi_swip%d", +}; + +static const struct mhi_device_id mhi_net_id_table[] = { + /* Hardware accelerated data PATH (to modem IPA), protocol agnostic */ + { .chan = "IP_HW0", .driver_data = (kernel_ulong_t)&mhi_hwip0 }, + /* Software data PATH (to modem CPU) */ + { .chan = "IP_SW0", .driver_data = (kernel_ulong_t)&mhi_swip0 }, + {} +}; +MODULE_DEVICE_TABLE(mhi, mhi_net_id_table); + +static struct mhi_driver mhi_net_driver = { + .probe = mhi_net_probe, + .remove = mhi_net_remove, + .dl_xfer_cb = mhi_net_dl_callback, + .ul_xfer_cb = mhi_net_ul_callback, + .id_table = mhi_net_id_table, + .driver = { + .name = "mhi_net", + .owner = THIS_MODULE, + }, +}; + +module_mhi_driver(mhi_net_driver); + +MODULE_AUTHOR("Loic Poulain "); +MODULE_DESCRIPTION("Network over MHI"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/net/mhi_net.c b/drivers/net/mhi_net.c deleted file mode 100644 index b92c2e1..0000000 --- a/drivers/net/mhi_net.c +++ /dev/null @@ -1,429 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* MHI Network driver - Network over MHI bus - * - * Copyright (C) 2020 Linaro Ltd - */ - -#include -#include -#include -#include -#include -#include -#include - -#define MHI_NET_MIN_MTU ETH_MIN_MTU -#define MHI_NET_MAX_MTU 0xffff -#define MHI_NET_DEFAULT_MTU 0x4000 - -struct mhi_net_stats { - u64_stats_t rx_packets; - u64_stats_t rx_bytes; - u64_stats_t rx_errors; - u64_stats_t rx_dropped; - u64_stats_t tx_packets; - u64_stats_t tx_bytes; - u64_stats_t tx_errors; - u64_stats_t tx_dropped; - struct u64_stats_sync tx_syncp; - struct u64_stats_sync rx_syncp; -}; - -struct mhi_net_dev { - struct mhi_device *mdev; - struct net_device *ndev; - struct sk_buff *skbagg_head; - struct sk_buff *skbagg_tail; - const struct mhi_net_proto *proto; - void *proto_data; - struct delayed_work rx_refill; - struct mhi_net_stats stats; - u32 rx_queue_sz; -}; - -struct mhi_net_proto { - int (*init)(struct mhi_net_dev *mhi_netdev); - struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb); - void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb); -}; - -struct mhi_device_info { - const char *netname; - const struct mhi_net_proto *proto; -}; - -static int mhi_ndo_open(struct net_device *ndev) -{ - struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); - - /* Feed the rx buffer pool */ - schedule_delayed_work(&mhi_netdev->rx_refill, 0); - - /* Carrier is established via out-of-band channel (e.g. 
qmi) */ - netif_carrier_on(ndev); - - netif_start_queue(ndev); - - return 0; -} - -static int mhi_ndo_stop(struct net_device *ndev) -{ - struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); - - netif_stop_queue(ndev); - netif_carrier_off(ndev); - cancel_delayed_work_sync(&mhi_netdev->rx_refill); - - return 0; -} - -static int mhi_ndo_xmit(struct sk_buff *skb, struct net_device *ndev) -{ - struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); - const struct mhi_net_proto *proto = mhi_netdev->proto; - struct mhi_device *mdev = mhi_netdev->mdev; - int err; - - if (proto && proto->tx_fixup) { - skb = proto->tx_fixup(mhi_netdev, skb); - if (unlikely(!skb)) - goto exit_drop; - } - - err = mhi_queue_skb(mdev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT); - if (unlikely(err)) { - net_err_ratelimited("%s: Failed to queue TX buf (%d)\n", - ndev->name, err); - dev_kfree_skb_any(skb); - goto exit_drop; - } - - if (mhi_queue_is_full(mdev, DMA_TO_DEVICE)) - netif_stop_queue(ndev); - - return NETDEV_TX_OK; - -exit_drop: - u64_stats_update_begin(&mhi_netdev->stats.tx_syncp); - u64_stats_inc(&mhi_netdev->stats.tx_dropped); - u64_stats_update_end(&mhi_netdev->stats.tx_syncp); - - return NETDEV_TX_OK; -} - -static void mhi_ndo_get_stats64(struct net_device *ndev, - struct rtnl_link_stats64 *stats) -{ - struct mhi_net_dev *mhi_netdev = netdev_priv(ndev); - unsigned int start; - - do { - start = u64_stats_fetch_begin_irq(&mhi_netdev->stats.rx_syncp); - stats->rx_packets = u64_stats_read(&mhi_netdev->stats.rx_packets); - stats->rx_bytes = u64_stats_read(&mhi_netdev->stats.rx_bytes); - stats->rx_errors = u64_stats_read(&mhi_netdev->stats.rx_errors); - stats->rx_dropped = u64_stats_read(&mhi_netdev->stats.rx_dropped); - } while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.rx_syncp, start)); - - do { - start = u64_stats_fetch_begin_irq(&mhi_netdev->stats.tx_syncp); - stats->tx_packets = u64_stats_read(&mhi_netdev->stats.tx_packets); - stats->tx_bytes = u64_stats_read(&mhi_netdev->stats.tx_bytes); - stats->tx_errors = u64_stats_read(&mhi_netdev->stats.tx_errors); - stats->tx_dropped = u64_stats_read(&mhi_netdev->stats.tx_dropped); - } while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.tx_syncp, start)); -} - -static const struct net_device_ops mhi_netdev_ops = { - .ndo_open = mhi_ndo_open, - .ndo_stop = mhi_ndo_stop, - .ndo_start_xmit = mhi_ndo_xmit, - .ndo_get_stats64 = mhi_ndo_get_stats64, -}; - -static void mhi_net_setup(struct net_device *ndev) -{ - ndev->header_ops = NULL; /* No header */ - ndev->type = ARPHRD_RAWIP; - ndev->hard_header_len = 0; - ndev->addr_len = 0; - ndev->flags = IFF_POINTOPOINT | IFF_NOARP; - ndev->netdev_ops = &mhi_netdev_ops; - ndev->mtu = MHI_NET_DEFAULT_MTU; - ndev->min_mtu = MHI_NET_MIN_MTU; - ndev->max_mtu = MHI_NET_MAX_MTU; - ndev->tx_queue_len = 1000; -} - -static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev, - struct sk_buff *skb) -{ - struct sk_buff *head = mhi_netdev->skbagg_head; - struct sk_buff *tail = mhi_netdev->skbagg_tail; - - /* This is non-paged skb chaining using frag_list */ - if (!head) { - mhi_netdev->skbagg_head = skb; - return skb; - } - - if (!skb_shinfo(head)->frag_list) - skb_shinfo(head)->frag_list = skb; - else - tail->next = skb; - - head->len += skb->len; - head->data_len += skb->len; - head->truesize += skb->truesize; - - mhi_netdev->skbagg_tail = skb; - - return mhi_netdev->skbagg_head; -} - -static void mhi_net_dl_callback(struct mhi_device *mhi_dev, - struct mhi_result *mhi_res) -{ - struct mhi_net_dev *mhi_netdev = 
dev_get_drvdata(&mhi_dev->dev); - const struct mhi_net_proto *proto = mhi_netdev->proto; - struct sk_buff *skb = mhi_res->buf_addr; - int free_desc_count; - - free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE); - - if (unlikely(mhi_res->transaction_status)) { - switch (mhi_res->transaction_status) { - case -EOVERFLOW: - /* Packet can not fit in one MHI buffer and has been - * split over multiple MHI transfers, do re-aggregation. - * That usually means the device side MTU is larger than - * the host side MTU/MRU. Since this is not optimal, - * print a warning (once). - */ - netdev_warn_once(mhi_netdev->ndev, - "Fragmented packets received, fix MTU?\n"); - skb_put(skb, mhi_res->bytes_xferd); - mhi_net_skb_agg(mhi_netdev, skb); - break; - case -ENOTCONN: - /* MHI layer stopping/resetting the DL channel */ - dev_kfree_skb_any(skb); - return; - default: - /* Unknown error, simply drop */ - dev_kfree_skb_any(skb); - u64_stats_update_begin(&mhi_netdev->stats.rx_syncp); - u64_stats_inc(&mhi_netdev->stats.rx_errors); - u64_stats_update_end(&mhi_netdev->stats.rx_syncp); - } - } else { - skb_put(skb, mhi_res->bytes_xferd); - - if (mhi_netdev->skbagg_head) { - /* Aggregate the final fragment */ - skb = mhi_net_skb_agg(mhi_netdev, skb); - mhi_netdev->skbagg_head = NULL; - } - - u64_stats_update_begin(&mhi_netdev->stats.rx_syncp); - u64_stats_inc(&mhi_netdev->stats.rx_packets); - u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len); - u64_stats_update_end(&mhi_netdev->stats.rx_syncp); - - switch (skb->data[0] & 0xf0) { - case 0x40: - skb->protocol = htons(ETH_P_IP); - break; - case 0x60: - skb->protocol = htons(ETH_P_IPV6); - break; - default: - skb->protocol = htons(ETH_P_MAP); - break; - } - - if (proto && proto->rx) - proto->rx(mhi_netdev, skb); - else - netif_rx(skb); - } - - /* Refill if RX buffers queue becomes low */ - if (free_desc_count >= mhi_netdev->rx_queue_sz / 2) - schedule_delayed_work(&mhi_netdev->rx_refill, 0); -} - -static void mhi_net_ul_callback(struct mhi_device *mhi_dev, - struct mhi_result *mhi_res) -{ - struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev); - struct net_device *ndev = mhi_netdev->ndev; - struct mhi_device *mdev = mhi_netdev->mdev; - struct sk_buff *skb = mhi_res->buf_addr; - - /* Hardware has consumed the buffer, so free the skb (which is not - * freed by the MHI stack) and perform accounting. 
- */ - dev_consume_skb_any(skb); - - u64_stats_update_begin(&mhi_netdev->stats.tx_syncp); - if (unlikely(mhi_res->transaction_status)) { - - /* MHI layer stopping/resetting the UL channel */ - if (mhi_res->transaction_status == -ENOTCONN) { - u64_stats_update_end(&mhi_netdev->stats.tx_syncp); - return; - } - - u64_stats_inc(&mhi_netdev->stats.tx_errors); - } else { - u64_stats_inc(&mhi_netdev->stats.tx_packets); - u64_stats_add(&mhi_netdev->stats.tx_bytes, mhi_res->bytes_xferd); - } - u64_stats_update_end(&mhi_netdev->stats.tx_syncp); - - if (netif_queue_stopped(ndev) && !mhi_queue_is_full(mdev, DMA_TO_DEVICE)) - netif_wake_queue(ndev); -} - -static void mhi_net_rx_refill_work(struct work_struct *work) -{ - struct mhi_net_dev *mhi_netdev = container_of(work, struct mhi_net_dev, - rx_refill.work); - struct net_device *ndev = mhi_netdev->ndev; - struct mhi_device *mdev = mhi_netdev->mdev; - int size = READ_ONCE(ndev->mtu); - struct sk_buff *skb; - int err; - - while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) { - skb = netdev_alloc_skb(ndev, size); - if (unlikely(!skb)) - break; - - err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb, size, MHI_EOT); - if (unlikely(err)) { - net_err_ratelimited("%s: Failed to queue RX buf (%d)\n", - ndev->name, err); - kfree_skb(skb); - break; - } - - /* Do not hog the CPU if rx buffers are consumed faster than - * queued (unlikely). - */ - cond_resched(); - } - - /* If we're still starved of rx buffers, reschedule later */ - if (mhi_get_free_desc_count(mdev, DMA_FROM_DEVICE) == mhi_netdev->rx_queue_sz) - schedule_delayed_work(&mhi_netdev->rx_refill, HZ / 2); -} - -static struct device_type wwan_type = { - .name = "wwan", -}; - -static int mhi_net_probe(struct mhi_device *mhi_dev, - const struct mhi_device_id *id) -{ - const struct mhi_device_info *info = (struct mhi_device_info *)id->driver_data; - struct device *dev = &mhi_dev->dev; - struct mhi_net_dev *mhi_netdev; - struct net_device *ndev; - int err; - - ndev = alloc_netdev(sizeof(*mhi_netdev), info->netname, - NET_NAME_PREDICTABLE, mhi_net_setup); - if (!ndev) - return -ENOMEM; - - mhi_netdev = netdev_priv(ndev); - dev_set_drvdata(dev, mhi_netdev); - mhi_netdev->ndev = ndev; - mhi_netdev->mdev = mhi_dev; - mhi_netdev->skbagg_head = NULL; - mhi_netdev->proto = info->proto; - SET_NETDEV_DEV(ndev, &mhi_dev->dev); - SET_NETDEV_DEVTYPE(ndev, &wwan_type); - - INIT_DELAYED_WORK(&mhi_netdev->rx_refill, mhi_net_rx_refill_work); - u64_stats_init(&mhi_netdev->stats.rx_syncp); - u64_stats_init(&mhi_netdev->stats.tx_syncp); - - /* Start MHI channels */ - err = mhi_prepare_for_transfer(mhi_dev); - if (err) - goto out_err; - - /* Number of transfer descriptors determines size of the queue */ - mhi_netdev->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE); - - err = register_netdev(ndev); - if (err) - goto out_err; - - if (mhi_netdev->proto) { - err = mhi_netdev->proto->init(mhi_netdev); - if (err) - goto out_err_proto; - } - - return 0; - -out_err_proto: - unregister_netdev(ndev); -out_err: - free_netdev(ndev); - return err; -} - -static void mhi_net_remove(struct mhi_device *mhi_dev) -{ - struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev); - - unregister_netdev(mhi_netdev->ndev); - - mhi_unprepare_from_transfer(mhi_netdev->mdev); - - if (mhi_netdev->skbagg_head) - kfree_skb(mhi_netdev->skbagg_head); - - free_netdev(mhi_netdev->ndev); -} - -static const struct mhi_device_info mhi_hwip0 = { - .netname = "mhi_hwip%d", -}; - -static const struct mhi_device_info mhi_swip0 = { - .netname = 
"mhi_swip%d", -}; - -static const struct mhi_device_id mhi_net_id_table[] = { - /* Hardware accelerated data PATH (to modem IPA), protocol agnostic */ - { .chan = "IP_HW0", .driver_data = (kernel_ulong_t)&mhi_hwip0 }, - /* Software data PATH (to modem CPU) */ - { .chan = "IP_SW0", .driver_data = (kernel_ulong_t)&mhi_swip0 }, - {} -}; -MODULE_DEVICE_TABLE(mhi, mhi_net_id_table); - -static struct mhi_driver mhi_net_driver = { - .probe = mhi_net_probe, - .remove = mhi_net_remove, - .dl_xfer_cb = mhi_net_dl_callback, - .ul_xfer_cb = mhi_net_ul_callback, - .id_table = mhi_net_id_table, - .driver = { - .name = "mhi_net", - .owner = THIS_MODULE, - }, -}; - -module_mhi_driver(mhi_net_driver); - -MODULE_AUTHOR("Loic Poulain "); -MODULE_DESCRIPTION("Network over MHI"); -MODULE_LICENSE("GPL v2"); From patchwork Thu Feb 4 18:17:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Loic Poulain X-Patchwork-Id: 376318 Delivered-To: patch@linaro.org Received: by 2002:a17:906:48d2:0:0:0:0 with SMTP id d18csp1512031ejt; Thu, 4 Feb 2021 10:13:32 -0800 (PST) X-Google-Smtp-Source: ABdhPJyI72TXf8yuKr0RTDxQyW+tsSOXauvyPpYE6XUSlnDw5HpTnG7/2OQeHp2SvytVqh2v0Ddz X-Received: by 2002:a17:906:c08a:: with SMTP id f10mr383772ejz.52.1612462412580; Thu, 04 Feb 2021 10:13:32 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1612462412; cv=none; d=google.com; s=arc-20160816; b=lG5EGq7aewRie0dPlZDIdMKnhCeX7Y61E2hyUR0xq+Q/jF3dg0+61v1myYfPJsP5N4 GX780qJSO75OWxAvXQv9tVhY82gG5HrR3gFIp0wSy+EnqPD1IokdQ1uPMqIernW2A3ud pyYEBjhFf7XYxJNRE5WvYcSvbYBmq6QrS7RIfgBg99JhisOSJfiPaKWGn3mCn8hlvFYe AeqoyrvmxpBGJiSS+Lu8q8bItPK+nsuskK8iAoUFWppIJTDBexns1rKUVsApKRtusuOM mMlR5Tab1yqKrE6JAgdixFDt0HOKNtmRuxaYYpVVE0Y5inewWI93qXqL1x4wj5c4RZ1v r29Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=nUxAXdLjhkJNUqiBcTNdD1orykfV5lPBWV6f7eDBEEk=; b=IJs0A19agSRZhad1qz8COn4H//er49FOuQH3Ee5iHdFiJNvFjBoeOw39Upj5eCiX6B dU/quVkZ/Mb05daYIvV/jdDzCYwrxWKndyWg5qY7NkEVTSPTFCYIO1ee7ueS/WQcvsEq kkXZTuVyTkqi0XjJ1qDMMtNtcBYAvsd+8J2hXxJAu8VjGij9RwzVq3/m7BOtnSVw1YcL vq+HRWOvOXoGyaOkSQI4Ad9qxoqSUz3DYeB2/FE6/xWN3yTBbqtAxIgsdLjR4ssQlVRd UF4mStZNm/2ioNfXPKiUKJHx0VGm0VWqQtwtskYRl2La7UdaJ0gFqzi+Zv6l7QSOhvoJ oPFw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="YS/hhQgn"; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: netdev@vger.kernel.org, bjorn@mork.no, dcbw@redhat.com, carl.yin@quectel.com, mpearson@lenovo.com, cchen50@lenovo.com, jwjiang@lenovo.com, ivan.zhang@quectel.com, naveen.kumar@quectel.com, Loic Poulain
Subject: [PATCH net-next v3 3/5] net: mhi: Create mhi.h
Date: Thu, 4 Feb 2021 19:17:39 +0100
Message-Id: <1612462661-23045-4-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
References: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

Move the mhi-net shared structures to a dedicated mhi.h header, which will be
used by the upcoming proto(s).

Signed-off-by: Loic Poulain
---
 drivers/net/mhi/mhi.h | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/mhi/net.c | 33 ++-------------------------------
 2 files changed, 38 insertions(+), 31 deletions(-)
 create mode 100644 drivers/net/mhi/mhi.h

-- 
2.7.4

diff --git a/drivers/net/mhi/mhi.h b/drivers/net/mhi/mhi.h
new file mode 100644
index 0000000..5050e4a
--- /dev/null
+++ b/drivers/net/mhi/mhi.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* MHI Network driver - Network over MHI bus
+ *
+ * Copyright (C) 2021 Linaro Ltd
+ */
+
+struct mhi_net_stats {
+	u64_stats_t rx_packets;
+	u64_stats_t rx_bytes;
+	u64_stats_t rx_errors;
+	u64_stats_t rx_dropped;
+	u64_stats_t tx_packets;
+	u64_stats_t tx_bytes;
+	u64_stats_t tx_errors;
+	u64_stats_t tx_dropped;
+	struct u64_stats_sync tx_syncp;
+	struct u64_stats_sync rx_syncp;
+};
+
+struct mhi_net_dev {
+	struct mhi_device *mdev;
+	struct net_device *ndev;
+	struct sk_buff *skbagg_head;
+	struct sk_buff *skbagg_tail;
+	const struct mhi_net_proto *proto;
+	void *proto_data;
+	struct delayed_work rx_refill;
+	struct mhi_net_stats stats;
+	u32 rx_queue_sz;
+};
+
+struct mhi_net_proto {
+	int (*init)(struct mhi_net_dev *mhi_netdev);
+	struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
+	void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
+};
diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c
index b92c2e1..58b4b7c 100644
--- a/drivers/net/mhi/net.c
+++ b/drivers/net/mhi/net.c
@@ -12,41 +12,12 @@
 #include
 #include
 
+#include "mhi.h"
+
 #define MHI_NET_MIN_MTU		ETH_MIN_MTU
 #define MHI_NET_MAX_MTU		0xffff
 #define MHI_NET_DEFAULT_MTU	0x4000
 
-struct mhi_net_stats {
-	u64_stats_t rx_packets;
-	u64_stats_t rx_bytes;
-	u64_stats_t rx_errors;
-	u64_stats_t rx_dropped;
-	u64_stats_t tx_packets;
-	u64_stats_t tx_bytes;
-	u64_stats_t tx_errors;
-	u64_stats_t tx_dropped;
-	struct u64_stats_sync tx_syncp;
-	struct u64_stats_sync rx_syncp;
-};
-
-struct mhi_net_dev {
-	struct mhi_device *mdev;
-	struct net_device *ndev;
-	struct sk_buff *skbagg_head;
-	struct sk_buff *skbagg_tail;
-	const struct mhi_net_proto *proto;
-	void *proto_data;
-	struct delayed_work rx_refill;
-	struct mhi_net_stats stats;
-	u32 rx_queue_sz;
-};
-
-struct mhi_net_proto {
-	int (*init)(struct mhi_net_dev *mhi_netdev);
-	struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
-	void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
-};
-
 struct mhi_device_info {
 	const char *netname;
 	const struct mhi_net_proto *proto;
 };
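As a rough sketch of how the shared header is meant to be consumed (illustration only, not part of the patch; the example_* names, the context fields and the chosen includes are assumptions), a protocol source file placed next to net.c would include "mhi.h" and keep its private state behind the proto_data pointer:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/u64_stats_sync.h>

#include "mhi.h"

/* Hypothetical per-protocol state */
struct example_context {
	u16 tx_seq;
	u16 rx_seq;
};

static int example_init(struct mhi_net_dev *mhi_netdev)
{
	struct example_context *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	/* Stored here, retrieved later from the rx/tx_fixup callbacks */
	mhi_netdev->proto_data = ctx;

	return 0;
}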
From patchwork Thu Feb 4 18:17:40 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 376317
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: netdev@vger.kernel.org, bjorn@mork.no, dcbw@redhat.com, carl.yin@quectel.com, mpearson@lenovo.com, cchen50@lenovo.com, jwjiang@lenovo.com, ivan.zhang@quectel.com, naveen.kumar@quectel.com, Loic Poulain
Subject: [PATCH net-next v3 4/5] net: mhi: Add rx_length_errors stat
Date: Thu, 4 Feb 2021 19:17:40 +0100
Message-Id: <1612462661-23045-5-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
References: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

This counter can be used by a protocol handler when the received packet length is incorrect.

Signed-off-by: Loic Poulain
---
 drivers/net/mhi/mhi.h | 1 +
 drivers/net/mhi/net.c | 1 +
 2 files changed, 2 insertions(+)

-- 
2.7.4

diff --git a/drivers/net/mhi/mhi.h b/drivers/net/mhi/mhi.h
index 5050e4a..82210e0 100644
--- a/drivers/net/mhi/mhi.h
+++ b/drivers/net/mhi/mhi.h
@@ -9,6 +9,7 @@ struct mhi_net_stats {
 	u64_stats_t rx_bytes;
 	u64_stats_t rx_errors;
 	u64_stats_t rx_dropped;
+	u64_stats_t rx_length_errors;
 	u64_stats_t tx_packets;
 	u64_stats_t tx_bytes;
 	u64_stats_t tx_errors;
diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c
index 58b4b7c..44cbfb3 100644
--- a/drivers/net/mhi/net.c
+++ b/drivers/net/mhi/net.c
@@ -95,6 +95,7 @@ static void mhi_ndo_get_stats64(struct net_device *ndev,
 		stats->rx_bytes = u64_stats_read(&mhi_netdev->stats.rx_bytes);
 		stats->rx_errors = u64_stats_read(&mhi_netdev->stats.rx_errors);
 		stats->rx_dropped = u64_stats_read(&mhi_netdev->stats.rx_dropped);
+		stats->rx_length_errors = u64_stats_read(&mhi_netdev->stats.rx_length_errors);
 	} while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.rx_syncp, start));
 
 	do {
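Illustration only (not from the series; example_rx is hypothetical and the usb_cdc_ncm_nth16 size check is just a stand-in for a real protocol header check): a protocol rx callback could account a malformed frame against the new counter like this:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/usb/cdc.h>

#include "mhi.h"

static void example_rx(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb)
{
	/* Too short to even carry the expected protocol header */
	if (skb->len < sizeof(struct usb_cdc_ncm_nth16)) {
		u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
		u64_stats_inc(&mhi_netdev->stats.rx_errors);
		u64_stats_inc(&mhi_netdev->stats.rx_length_errors);
		u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
		dev_kfree_skb_any(skb);
		return;
	}

	/* Otherwise hand the frame to the stack unchanged */
	netif_rx(skb);
}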
From patchwork Thu Feb 4 18:17:41 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 376320
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
To: kuba@kernel.org, davem@davemloft.net
Cc: netdev@vger.kernel.org, bjorn@mork.no, dcbw@redhat.com, carl.yin@quectel.com,
 mpearson@lenovo.com, cchen50@lenovo.com, jwjiang@lenovo.com, ivan.zhang@quectel.com,
 naveen.kumar@quectel.com, Loic Poulain <loic.poulain@linaro.org>
Subject: [PATCH net-next v3 5/5] net: mhi: Add mbim proto
Date: Thu, 4 Feb 2021 19:17:41 +0100
Message-Id: <1612462661-23045-6-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
References: <1612462661-23045-1-git-send-email-loic.poulain@linaro.org>
MIME-Version: 1.0
Precedence: bulk
List-ID: <netdev.vger.kernel.org>
X-Mailing-List: netdev@vger.kernel.org

MBIM was initially specified by the USB-IF for transporting data (IP)
between a modem and a host over USB. However, some modern modems also
support MBIM over PCIe (via MHI). In the same way as QMAP (rmnet), it
allows aggregating IP packets and performing context multiplexing.

This change adds minimal MBIM data transport support to MHI, making it
possible to support MBIM-only modems. MBIM being based on USB NCM, it
reuses and copies some helpers/functions from the USB stack (cdc-ncm,
cdc-mbim).

Note that this is a subset of the CDC-MBIM specification, supporting
only transport of network data (IP); there is no support for DSS.
Moreover, multi-session (for multi-PDN) is not supported in this
initial version, but will be added later, aligned with the cdc-mbim
solution (VLAN tags).

This code has been inspired by the downstream mhi_mbim implementation
(Carl Yin <carl.yin@quectel.com>).

Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
---
 drivers/net/mhi/Makefile     |   2 +-
 drivers/net/mhi/mhi.h        |   3 +
 drivers/net/mhi/net.c        |   7 ++
 drivers/net/mhi/proto_mbim.c | 294 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 305 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/mhi/proto_mbim.c

--
2.7.4
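As a rough illustration of that framing (an editorial sketch, not part of the
patch): an aggregated MBIM frame, or NTB, starts with an NTH16 header (the
16-bit offset variant) that points to one or more NDP tables, each listing the
offset and length of the datagrams it carries. The user-space program below
builds a single-datagram NTB the same way mbim_tx_fixup() in the patch does,
then walks it back the way mbim_rx() does. The structs are simplified local
mirrors of the <linux/usb/cdc.h> definitions and a little-endian host is
assumed:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NTH16_SIGN     0x484d434eU /* "NCMH" */
#define NDP16_IPS_SIGN 0x00535049U /* "IPS" + session 0 */

/* Simplified mirrors of usb_cdc_ncm_nth16 / ndp16 / dpe16 */
struct nth16 { uint32_t sign; uint16_t hdr_len, seq, block_len, ndp_idx; } __attribute__((packed));
struct dpe16 { uint16_t idx, len; } __attribute__((packed));
struct ndp16 { uint32_t sign; uint16_t len, next_ndp_idx; struct dpe16 dpe[2]; } __attribute__((packed));
struct mbim_hdr { struct nth16 nth; struct ndp16 ndp; } __attribute__((packed));

int main(void)
{
        uint8_t dgram[40] = { 0x45 }; /* dummy IPv4 packet */
        uint8_t ntb[sizeof(struct mbim_hdr) + sizeof(dgram)];
        struct mbim_hdr *hdr = (struct mbim_hdr *)ntb;

        /* TX side: one NDP, one datagram placed right after the header */
        hdr->nth = (struct nth16){ NTH16_SIGN, sizeof(struct nth16), 0,
                                   sizeof(ntb), sizeof(struct nth16) };
        hdr->ndp = (struct ndp16){ NDP16_IPS_SIGN, sizeof(struct ndp16), 0,
                                   { { sizeof(struct mbim_hdr), sizeof(dgram) }, { 0, 0 } } };
        memcpy(ntb + sizeof(struct mbim_hdr), dgram, sizeof(dgram));

        /* RX side: follow ndp_idx, then the datagram pointer entries */
        const struct nth16 *nth = (const struct nth16 *)ntb;
        const struct ndp16 *ndp = (const struct ndp16 *)(ntb + nth->ndp_idx);
        const struct dpe16 *dpe;

        for (dpe = ndp->dpe; dpe->idx && dpe->len; dpe++)
                printf("datagram at offset %u, %u bytes, IPv%c\n",
                       (unsigned)dpe->idx, (unsigned)dpe->len,
                       (ntb[dpe->idx] & 0xf0) == 0x40 ? '4' : '6');
        return 0;
}

With a 12-byte NTH16, a 16-byte NDP16 (two entries, the second acting as NULL
terminator) and the datagram at offset 28, this matches the header that
mbim_tx_fixup() pushes in front of each skb and the needed_headroom set in
mbim_init().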
diff --git a/drivers/net/mhi/Makefile b/drivers/net/mhi/Makefile
index 0acf989..f71b9f8 100644
--- a/drivers/net/mhi/Makefile
+++ b/drivers/net/mhi/Makefile
@@ -1,3 +1,3 @@
 obj-$(CONFIG_MHI_NET) += mhi_net.o

-mhi_net-y := net.o
+mhi_net-y := net.o proto_mbim.o
diff --git a/drivers/net/mhi/mhi.h b/drivers/net/mhi/mhi.h
index 82210e0..12e7407 100644
--- a/drivers/net/mhi/mhi.h
+++ b/drivers/net/mhi/mhi.h
@@ -28,6 +28,7 @@ struct mhi_net_dev {
         struct delayed_work rx_refill;
         struct mhi_net_stats stats;
         u32 rx_queue_sz;
+        int msg_enable;
 };

 struct mhi_net_proto {
@@ -35,3 +36,5 @@ struct mhi_net_proto {
         struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
         void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
 };
+
+extern const struct mhi_net_proto proto_mbim;
diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c
index 44cbfb3..f599608 100644
--- a/drivers/net/mhi/net.c
+++ b/drivers/net/mhi/net.c
@@ -373,11 +373,18 @@ static const struct mhi_device_info mhi_swip0 = {
         .netname = "mhi_swip%d",
 };

+static const struct mhi_device_info mhi_hwip0_mbim = {
+        .netname = "mhi_mbim%d",
+        .proto = &proto_mbim,
+};
+
 static const struct mhi_device_id mhi_net_id_table[] = {
         /* Hardware accelerated data PATH (to modem IPA), protocol agnostic */
         { .chan = "IP_HW0", .driver_data = (kernel_ulong_t)&mhi_hwip0 },
         /* Software data PATH (to modem CPU) */
         { .chan = "IP_SW0", .driver_data = (kernel_ulong_t)&mhi_swip0 },
+        /* Hardware accelerated data PATH (to modem IPA), MBIM protocol */
+        { .chan = "IP_HW0_MBIM", .driver_data = (kernel_ulong_t)&mhi_hwip0_mbim },
         {}
 };
 MODULE_DEVICE_TABLE(mhi, mhi_net_id_table);
diff --git a/drivers/net/mhi/proto_mbim.c b/drivers/net/mhi/proto_mbim.c
new file mode 100644
index 0000000..dd6772e
--- /dev/null
+++ b/drivers/net/mhi/proto_mbim.c
@@ -0,0 +1,294 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* MHI Network driver - Network over MHI bus
+ *
+ * Copyright (C) 2021 Linaro Ltd <loic.poulain@linaro.org>
+ *
+ * This driver copies some code from cdc_ncm, which is:
+ *        Copyright (C) ST-Ericsson 2010-2012
+ * and cdc_mbim, which is:
+ *        Copyright (c) 2012 Smith Micro Software, Inc.
+ *        Copyright (c) 2012 Bjørn Mork <bjorn@mork.no>
+ *
+ */
+
+#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/mii.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/usb.h>
+#include <linux/usb/cdc.h>
+#include <linux/usb/usbnet.h>
+#include <linux/usb/cdc_ncm.h>
+
+#include "mhi.h"
+
+#define MBIM_NDP16_SIGN_MASK cpu_to_le32(0x00ffffff)
+
+struct mbim_context {
+        u16 rx_seq;
+        u16 tx_seq;
+};
+
+static void __mbim_length_errors_inc(struct mhi_net_dev *dev)
+{
+        u64_stats_update_begin(&dev->stats.rx_syncp);
+        u64_stats_inc(&dev->stats.rx_length_errors);
+        u64_stats_update_end(&dev->stats.rx_syncp);
+}
+
+static void __mbim_errors_inc(struct mhi_net_dev *dev)
+{
+        u64_stats_update_begin(&dev->stats.rx_syncp);
+        u64_stats_inc(&dev->stats.rx_errors);
+        u64_stats_update_end(&dev->stats.rx_syncp);
+}
+
+static int mbim_rx_verify_nth16(struct sk_buff *skb)
+{
+        struct mhi_net_dev *dev = netdev_priv(skb->dev);
+        struct mbim_context *ctx = dev->proto_data;
+        struct usb_cdc_ncm_nth16 *nth16;
+        int len;
+
+        if (skb->len < sizeof(struct usb_cdc_ncm_nth16) +
+                        sizeof(struct usb_cdc_ncm_ndp16)) {
+                netif_dbg(dev, rx_err, dev->ndev, "frame too short\n");
+                __mbim_length_errors_inc(dev);
+                return -EINVAL;
+        }
+
+        nth16 = (struct usb_cdc_ncm_nth16 *)skb->data;
+
+        if (nth16->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN)) {
+                netif_dbg(dev, rx_err, dev->ndev,
+                          "invalid NTH16 signature <%#010x>\n",
+                          le32_to_cpu(nth16->dwSignature));
+                __mbim_errors_inc(dev);
+                return -EINVAL;
+        }
+
+        /* No limit on the block length, except the size of the data pkt */
+        len = le16_to_cpu(nth16->wBlockLength);
+        if (len > skb->len) {
+                netif_dbg(dev, rx_err, dev->ndev,
+                          "NTB does not fit into the skb %u/%u\n", len,
+                          skb->len);
+                __mbim_length_errors_inc(dev);
+                return -EINVAL;
+        }
+
+        if (ctx->rx_seq + 1 != le16_to_cpu(nth16->wSequence) &&
+            (ctx->rx_seq || le16_to_cpu(nth16->wSequence)) &&
+            !(ctx->rx_seq == 0xffff && !le16_to_cpu(nth16->wSequence))) {
+                netif_dbg(dev, rx_err, dev->ndev,
+                          "sequence number glitch prev=%d curr=%d\n",
+                          ctx->rx_seq, le16_to_cpu(nth16->wSequence));
+        }
+        ctx->rx_seq = le16_to_cpu(nth16->wSequence);
+
+        return le16_to_cpu(nth16->wNdpIndex);
+}
+
+static int mbim_rx_verify_ndp16(struct sk_buff *skb, int ndpoffset)
+{
+        struct mhi_net_dev *dev = netdev_priv(skb->dev);
+        struct usb_cdc_ncm_ndp16 *ndp16;
+        int ret;
+
+        if (ndpoffset + sizeof(struct usb_cdc_ncm_ndp16) > skb->len) {
+                netif_dbg(dev, rx_err, dev->ndev, "invalid NDP offset <%u>\n",
+                          ndpoffset);
+                return -EINVAL;
+        }
+
+        ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
+
+        if (le16_to_cpu(ndp16->wLength) < USB_CDC_NCM_NDP16_LENGTH_MIN) {
+                netif_dbg(dev, rx_err, dev->ndev, "invalid DPT16 length <%u>\n",
+                          le16_to_cpu(ndp16->wLength));
+                return -EINVAL;
+        }
+
+        ret = ((le16_to_cpu(ndp16->wLength) -
+                        sizeof(struct usb_cdc_ncm_ndp16)) /
+                        sizeof(struct usb_cdc_ncm_dpe16));
+        ret--; /* Last entry is always a NULL terminator */
+
+        if ((sizeof(struct usb_cdc_ncm_ndp16) +
+             ret * (sizeof(struct usb_cdc_ncm_dpe16))) > skb->len) {
+                netif_dbg(dev, rx_err, dev->ndev,
+                          "Invalid nframes = %d\n", ret);
+                return -EINVAL;
+        }
+
+        return ret;
+}
+
+static void mbim_rx(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb)
+{
+        struct net_device *ndev = mhi_netdev->ndev;
+        int ndpoffset;
+
+        if (skb_linearize(skb))
+                goto error;
+
+        /* Check NTB header and retrieve first NDP offset */
+        ndpoffset = mbim_rx_verify_nth16(skb);
+        if (ndpoffset < 0) {
+                net_err_ratelimited("%s: Incorrect NTB header\n", ndev->name);
+                goto error;
+        }
+
+        /* Process each NDP */
+        while (1) {
+                struct usb_cdc_ncm_ndp16 *ndp16;
+                struct usb_cdc_ncm_dpe16 *dpe16;
+                int nframes, n;
+
+                /* Check NDP header and retrieve number of datagrams */
+                nframes = mbim_rx_verify_ndp16(skb, ndpoffset);
+                if (nframes < 0) {
+                        net_err_ratelimited("%s: Incorrect NDP16\n", ndev->name);
+                        __mbim_length_errors_inc(mhi_netdev);
+                        goto error;
+                }
+
+                /* Only IP data type supported, no DSS in MHI context */
+                ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
+                if ((ndp16->dwSignature & MBIM_NDP16_SIGN_MASK) !=
+                                cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN)) {
+                        net_err_ratelimited("%s: Unsupported NDP type\n", ndev->name);
+                        __mbim_errors_inc(mhi_netdev);
+                        goto next_ndp;
+                }
+
+                /* Only primary IP session 0 (0x00) supported for now */
+                if (ndp16->dwSignature & ~MBIM_NDP16_SIGN_MASK) {
+                        net_err_ratelimited("%s: bad packet session\n", ndev->name);
+                        __mbim_errors_inc(mhi_netdev);
+                        goto next_ndp;
+                }
+
+                /* de-aggregate and deliver IP packets */
+                dpe16 = ndp16->dpe16;
+                for (n = 0; n < nframes; n++, dpe16++) {
+                        u16 dgram_offset = le16_to_cpu(dpe16->wDatagramIndex);
+                        u16 dgram_len = le16_to_cpu(dpe16->wDatagramLength);
+                        struct sk_buff *skbn;
+
+                        if (!dgram_offset || !dgram_len)
+                                break; /* null terminator */
+
+                        skbn = netdev_alloc_skb(ndev, dgram_len);
+                        if (!skbn)
+                                continue;
+
+                        skb_put(skbn, dgram_len);
+                        memcpy(skbn->data, skb->data + dgram_offset, dgram_len);
+
+                        switch (skbn->data[0] & 0xf0) {
+                        case 0x40:
+                                skbn->protocol = htons(ETH_P_IP);
+                                break;
+                        case 0x60:
+                                skbn->protocol = htons(ETH_P_IPV6);
+                                break;
+                        default:
+                                net_err_ratelimited("%s: unknown protocol\n",
+                                                    ndev->name);
+                                __mbim_errors_inc(mhi_netdev);
+                                dev_kfree_skb_any(skbn);
+                                continue;
+                        }
+
+                        netif_rx(skbn);
+                }
+next_ndp:
+                /* Other NDP to process? */
+                ndpoffset = le16_to_cpu(ndp16->wNextNdpIndex);
+                if (!ndpoffset)
+                        break;
+        }
+
+        /* free skb */
+        dev_consume_skb_any(skb);
+        return;
+error:
+        dev_kfree_skb_any(skb);
+}
+
+struct mbim_tx_hdr {
+        struct usb_cdc_ncm_nth16 nth16;
+        struct usb_cdc_ncm_ndp16 ndp16;
+        struct usb_cdc_ncm_dpe16 dpe16[2];
+} __packed;
+
+static struct sk_buff *mbim_tx_fixup(struct mhi_net_dev *mhi_netdev,
+                                     struct sk_buff *skb)
+{
+        struct mbim_context *ctx = mhi_netdev->proto_data;
+        unsigned int dgram_size = skb->len;
+        struct usb_cdc_ncm_nth16 *nth16;
+        struct usb_cdc_ncm_ndp16 *ndp16;
+        struct mbim_tx_hdr *mbim_hdr;
+
+        /* For now, this is a partial implementation of CDC MBIM, only one NDP
+         * is sent, containing the IP packet (no aggregation).
+         */
+
+        /* Ensure we have enough headroom for crafting MBIM header */
+        if (skb_cow_head(skb, sizeof(struct mbim_tx_hdr))) {
+                dev_kfree_skb_any(skb);
+                return NULL;
+        }
+
+        mbim_hdr = skb_push(skb, sizeof(struct mbim_tx_hdr));
+
+        /* Fill NTB header */
+        nth16 = &mbim_hdr->nth16;
+        nth16->dwSignature = cpu_to_le32(USB_CDC_NCM_NTH16_SIGN);
+        nth16->wHeaderLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16));
+        nth16->wSequence = cpu_to_le16(ctx->tx_seq++);
+        nth16->wBlockLength = cpu_to_le16(skb->len);
+        nth16->wNdpIndex = cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16));
+
+        /* Fill the unique NDP */
+        ndp16 = &mbim_hdr->ndp16;
+        ndp16->dwSignature = cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN);
+        ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16) +
+                                        sizeof(struct usb_cdc_ncm_dpe16) * 2);
+        ndp16->wNextNdpIndex = 0;
+
+        /* Datagram follows the mbim header */
+        ndp16->dpe16[0].wDatagramIndex = cpu_to_le16(sizeof(struct mbim_tx_hdr));
+        ndp16->dpe16[0].wDatagramLength = cpu_to_le16(dgram_size);
+
+        /* null termination */
+        ndp16->dpe16[1].wDatagramIndex = 0;
+        ndp16->dpe16[1].wDatagramLength = 0;
+
+        return skb;
+}
+
+static int mbim_init(struct mhi_net_dev *mhi_netdev)
+{
+        struct net_device *ndev = mhi_netdev->ndev;
+
+        mhi_netdev->proto_data = devm_kzalloc(&ndev->dev,
+                                              sizeof(struct mbim_context),
+                                              GFP_KERNEL);
+        if (!mhi_netdev->proto_data)
+                return -ENOMEM;
+
+        ndev->needed_headroom = sizeof(struct mbim_tx_hdr);
+
+        return 0;
+}
+
+const struct mhi_net_proto proto_mbim = {
+        .init = mbim_init,
+        .rx = mbim_rx,
+        .tx_fixup = mbim_tx_fixup,
+};
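One detail worth noting for the follow-up work mentioned in the commit
message: the NDP16 signature carries "IPS" in its low three bytes and the IP
session index in its top byte, which is what the MBIM_NDP16_SIGN_MASK test
above relies on when it rejects anything but session 0. A possible helper for
the later multi-session support could look like the sketch below; the name and
placement are illustrative only, and cdc-mbim maps such sessions to VLAN tags,
which is the direction the commit message announces:

static u8 mbim_ndp16_session(__le32 dwSignature)
{
        /* "IPS" lives in bits 0-23, the session index in bits 24-31 */
        return le32_to_cpu(dwSignature) >> 24;
}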