
[v3,0/5] ARM: Add GXP UMAC Support

Message ID 20230816215220.114118-1-nick.hawkins@hpe.com

Message

Hawkins, Nick Aug. 16, 2023, 9:52 p.m. UTC
From: Nick Hawkins <nick.hawkins@hpe.com>

The GXP contains two Ethernet MACs that can be connected externally
to several physical devices. From an external interface perspective,
the BMC provides two SERDES interface connections capable of either
SGMII or 1000Base-X operation. The BMC also provides an RMII
interface for sideband connections to external Ethernet controllers.

The primary MAC (umac0) can be mapped to either SGMII/1000Base-X
SERDES interface. The secondary MAC (umac1) can be mapped only to the
second SGMII/1000Base-X SERDES interface, or it can be mapped for
RMII sideband.

The MDIO (mdio0) interface from the primary MAC (umac0) is used for
external PHY status. The MDIO (mdio1) interface from the secondary
MAC (umac1) is routed to the SGMII/1000Base-X IP blocks on the two
SERDES interface connections. In most cases the internal PHY connects
directly to the external PHY.

---

Changes since v2:
 *Moved PHY configuration from the MAC driver to the bootloader
 *Fixed several issues with the Kconfig and Makefile for both
  the mdio and umac drivers
 *Fixed code alignment where applicable
 *Removed MDIO from the MAC yaml
 *Added a description explaining the use-ncsi use case
 *Removed multiple unnecessary functions and function calls
  from the MAC driver

Changes since v1:
 *Corrected improper descriptions and use of | in yaml files
 *Used reverse Christmas tree format for network drivers
 *Moved gxp-umac-mdio.c to /mdio/
 *Fixed dependencies in both Kconfigs
 *Added COMPILE_TEST to both Kconfigs
 *Used devm_ functions where possible in both drivers
 *Moved mac-address inside the port node in yaml files
 *Exchanged listing individual yaml files for hpe,gxp*
 *Restricted use of le32

Nick Hawkins (5):
  dt-bindings: net: Add HPE GXP UMAC MDIO
  net: hpe: Add GXP UMAC MDIO
  dt-bindings: net: Add HPE GXP UMAC
  net: hpe: Add GXP UMAC Driver
  MAINTAINERS: HPE: Add GXP UMAC Networking Files

 .../bindings/net/hpe,gxp-umac-mdio.yaml       |  50 ++
 .../devicetree/bindings/net/hpe,gxp-umac.yaml |  97 +++
 MAINTAINERS                                   |   2 +
 drivers/net/ethernet/Kconfig                  |   1 +
 drivers/net/ethernet/Makefile                 |   1 +
 drivers/net/ethernet/hpe/Kconfig              |  32 +
 drivers/net/ethernet/hpe/Makefile             |   1 +
 drivers/net/ethernet/hpe/gxp-umac.c           | 759 ++++++++++++++++++
 drivers/net/ethernet/hpe/gxp-umac.h           |  89 ++
 drivers/net/mdio/Kconfig                      |  13 +
 drivers/net/mdio/Makefile                     |   1 +
 drivers/net/mdio/mdio-gxp-umac.c              | 142 ++++
 12 files changed, 1188 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/hpe,gxp-umac-mdio.yaml
 create mode 100644 Documentation/devicetree/bindings/net/hpe,gxp-umac.yaml
 create mode 100644 drivers/net/ethernet/hpe/Kconfig
 create mode 100644 drivers/net/ethernet/hpe/Makefile
 create mode 100644 drivers/net/ethernet/hpe/gxp-umac.c
 create mode 100644 drivers/net/ethernet/hpe/gxp-umac.h
 create mode 100644 drivers/net/mdio/mdio-gxp-umac.c

Comments

Hawkins, Nick Aug. 17, 2023, 7:19 p.m. UTC | #1
Hi Andrew,

> > +static int umac_init_hw(struct net_device *ndev)
> > +{
[...]
> > +	} else {
> > +		value |= UMAC_CFG_FULL_DUPLEX;
> > +
> > +		if (ndev->phydev->speed == SPEED_1000) {

> I'm pretty sure i pointed this out once before. It is not safe to
> access phydev members outside of the adjust link callback.

My mistake, I missed this phydev member access. I will rely on
the adjust_link callback to configure it.
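
Something along these lines is what I had in mind (rough sketch only;
umac_read_cfg()/umac_write_cfg() and UMAC_CFG_GIGABIT_MODE are
placeholder names, not the actual driver code):

static void umac_adjust_link(struct net_device *ndev)
{
	struct phy_device *phydev = ndev->phydev;
	u32 value;

	/* phydev members are only read here, in the adjust_link callback */
	value = umac_read_cfg(ndev);			/* placeholder helper */

	if (phydev->link) {
		if (phydev->duplex == DUPLEX_FULL)
			value |= UMAC_CFG_FULL_DUPLEX;
		else
			value &= ~UMAC_CFG_FULL_DUPLEX;

		if (phydev->speed == SPEED_1000)
			value |= UMAC_CFG_GIGABIT_MODE;	/* placeholder bit */
		else
			value &= ~UMAC_CFG_GIGABIT_MODE;

		umac_write_cfg(ndev, value);		/* placeholder helper */
	}

	phy_print_status(phydev);
}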

> > +static int umac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
> > +{
> > +	struct umac_priv *umac = netdev_priv(ndev);
> > +	struct umac_tx_desc_entry *ptxdesc;
> > +	unsigned int length;
> > +	u8 *pframe;
> > +
> > +	ptxdesc = &umac->tx_descs->entrylist[umac->tx_cur];
> > +	pframe = umac->tx_descs->framelist[umac->tx_cur];
> > +
> > +	length = skb->len;
> > +	if (length > 1514) {
> > +		netdev_err(ndev, "send data %d bytes > 1514, clamp it to 1514\n",
> > +			   skb->len);

> That should be rate limited.

How is this done? Is there a particular function to call that
will handle it in the backend? I thought rate limiting was
already happening later in this function, at:

	/* if current tx ring buffer is full, stop the queue */
	ptxdesc = &umac->tx_descs->entrylist[umac->tx_cur];
	if (ptxdesc->status & UMAC_RING_ENTRY_HW_OWN)
		netif_stop_queue(ndev);

> Also, if you chop the end of the packet, it is going to be useless. It
> is better to drop it, to improve your goodput.

I will drop it.
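
Roughly like this, if netdev_err_ratelimited() is the right helper for
the message (sketch only; 1514 here is ETH_FRAME_LEN):

	length = skb->len;
	if (length > ETH_FRAME_LEN) {
		/* drop oversized frames instead of clamping them */
		netdev_err_ratelimited(ndev, "dropping %d byte frame, max is %d\n",
				       skb->len, ETH_FRAME_LEN);
		ndev->stats.tx_dropped++;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}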

Thanks,

-Nick Hawkins
Andrew Lunn Aug. 18, 2023, 9:15 p.m. UTC | #2
> Would this be the #include <linux/dmapool.h> library?

<include/net/page_pool/helpers.h>

Take a look at drivers/net/ethernet/freescale/fec_main.c. That
driver/device is of similar complexity to yours. It had a recent
change from its own buffer management to page pool. It
started with

commit 95698ff6177b5f1f13f251da60e7348413046ae4
Author: Shenwei Wang <shenwei.wang@nxp.com>
Date:   Fri Sep 30 15:44:27 2022 -0500

    net: fec: using page pool to manage RX buffers

but there are additional patches later.
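
As a rough illustration of what the page pool side could look like (a
minimal sketch, not the FEC code and not your driver; the helper name
and ring size are made up):

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Create a page pool sized to the RX ring; PP_FLAG_DMA_MAP lets the
 * pool handle DMA mapping of the pages it hands out.
 */
static struct page_pool *umac_create_rx_pool(struct device *dev,
					     unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per RX buffer */
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

/* RX refill then uses page_pool_dev_alloc_pages(pool), buffers are
 * recycled with page_pool_put_full_page(pool, page, true), and the
 * pool is torn down with page_pool_destroy(pool).
 */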

    Andrew
Andrew Lunn Aug. 22, 2023, 8 p.m. UTC | #3
On Tue, Aug 22, 2023 at 07:00:49PM +0000, Hawkins, Nick wrote:
> 
> > <include/net/page_pool/helpers.h>
> 
> Hi Andrew,
> 
> I can't seem to find this file in linux master. Where is it?

~/linux$ ls include/net/page_pool/helpers.h
include/net/page_pool/helpers.h

When you say master, do you mean net-next/main? This is a network
driver, so you should be based on top of that tree.

https://www.kernel.org/doc/html/latest/process/maintainer-netdev.html#netdev-faq

> > Take a look at driver/net/ethernet/freescale/fec_main.c That
> > driver/device is of similar complexity to yours. It had a recent
> > change from its own buffer management to page pool. It
> > started with
> 
> I have looked over this driver and have a couple questions
> about the pages in general.
> 
> How do I determine what the correct pool size should be for the
> RX and TX?

There has been some recent discussion about that. Search the netdev
list over the last couple of weeks.

> I must admit I am not familiar with XDP.
> Is it required for the page pool library?

Nope, not required at all. The FEC driver was first converted to page
pool, and then XDP support was added. The conversion to page pool made
the driver faster; it could handle more packets per second. That is why I
suggested using it, plus it means less driver code, which means fewer
bugs.

	Andrew
Hawkins, Nick Aug. 25, 2023, 6:54 p.m. UTC | #4
Hi Andrew,

Thank you for pointing me at the correct repo to use. I was using the
incorrect one.

> Nope, not required at all. The FEC driver was first converted to page
> pool, and then XDP support added. The conversion to page pool made the
> driver faster, it could handle more packets per second. That is why i
> suggested using it, plus it means less driver code, which means less
> bugs.

Over the past week I have been trying to figure out how exactly I can
translate the current code over to the page pool API. It seems like
quite a complex change. As the driver seems to be keeping up with our
performance requirements, would it be acceptable to mark this as a
TODO for a future enhancement?

Thank you for the assistance,

-Nick Hawkins
Andrew Lunn Sept. 12, 2023, 11:52 p.m. UTC | #5
> Greetings Andrew,
> 
> In that case I will continue to attempt to try and adopt the page pool
> API. In all the examples with page pool HW rings it appears they are
> using alloc_etherdev_mqs. Are there any HW requirements to use this
> library? If there are none what is the typical number for rx and tx
> queues?

There are no hardware requirements as far as I understand it. If your
hardware only has one RX queue and one TX queue, define it as
1. Having more allows you to spread the load over multiple CPUs, with
each queue typically having its own interrupt, and interrupts are then
mapped to a single CPU. But if you don't have any of that, it should
not be a hindrance.
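
A single-queue allocation then looks roughly like this (sketch;
umac_priv is the name from the quoted driver code):

	/* one TX queue and one RX queue, matching a single ring pair */
	ndev = alloc_etherdev_mqs(sizeof(struct umac_priv), 1, 1);
	if (!ndev)
		return -ENOMEM;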

    Andrew