
[net,0/4] napi_gro_receive caller return value cleanups

Message ID 20200624220606.1390542-1-Jason@zx2c4.com

Message

Jason A. Donenfeld June 24, 2020, 10:06 p.m. UTC
In 6570bc79c0df ("net: core: use listified Rx for GRO_NORMAL in
napi_gro_receive()"), the GRO_NORMAL case stopped calling
netif_receive_skb_internal, checking its return value, and returning
GRO_DROP in case it failed. Instead, it calls into
netif_receive_skb_list_internal (after a bit of indirection), which
doesn't return any error. Therefore, napi_gro_receive will never return
GRO_DROP, making handling GRO_DROP dead code.

I emailed the author of 6570bc79c0df on netdev [1] to see if this change
was intentional, but the dlink.ru email address has been disconnected,
and looking a bit further myself, it seems somewhat infeasible to start
propagating return values backwards from the internal machinations of
netif_receive_skb_list_internal.

Taking a look at all the callers of napi_gro_receive, it appears that
three are checking the return value for the purpose of comparing it to
the now never-happening GRO_DROP, and one just casts it to (void), a
likely historical leftover. The remaining 116 of the 120 callers do not
bother checking the return value at all.

And it seems like these remaining 116 callers are doing the right thing:
after calling napi_gro_receive, the packet is now in the hands of the
upper layers of the networking stack, and the device driver itself has
no business making decisions based on what the upper layers choose to
do. Incrementing stats counters on GRO_DROP seems like a mistake, made
by these three drivers, but not by the remaining 117.

It would seem, therefore, that after rectifying these four callers of
napi_gro_receive, I should go ahead and just remove the return value
from napi_gro_receive altogether. However, napi_gro_receive has a
function event tracer, and being able to introspect into the networking
stack to see how often napi_gro_receive returns each interesting GRO
status (aside from _DROP) remains a data point worth keeping for
debugging.
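A hypothetical debugging session along those lines (assuming the napi_gro_receive_exit tracepoint under events/net/ in tracefs; requires root and a kernel built with tracepoint support):

```shell
cd /sys/kernel/tracing
# log the gro_result of every napi_gro_receive call
echo 1 > events/net/napi_gro_receive_exit/enable
sleep 1
# inspect the distribution of GRO statuses
grep napi_gro_receive_exit trace | head
echo 0 > events/net/napi_gro_receive_exit/enable
```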

So, this series simply gets rid of the return value checking for the
four useless places where that check never evaluates to anything
meaningful.

[1] https://lore.kernel.org/netdev/20200624210606.GA1362687@zx2c4.com/

Jason A. Donenfeld (4):
  wireguard: receive: account for napi_gro_receive never returning
    GRO_DROP
  socionext: account for napi_gro_receive never returning GRO_DROP
  hns: do not cast return value of napi_gro_receive to null
  wil6210: account for napi_gro_receive never returning GRO_DROP

 drivers/net/ethernet/hisilicon/hns/hns_enet.c |  2 +-
 drivers/net/ethernet/socionext/netsec.c       |  4 +-
 drivers/net/wireguard/receive.c               | 10 +----
 drivers/net/wireless/ath/wil6210/txrx.c       | 39 ++++++-------------
 4 files changed, 17 insertions(+), 39 deletions(-)

-- 
2.27.0

Comments

Edward Cree June 25, 2020, 2:31 p.m. UTC | #1
On 24/06/2020 23:06, Jason A. Donenfeld wrote:
> So, this series simply gets rid of the return value checking for the
> four useless places where that check never evaluates to anything
> meaningful.
I don't know much about the details of these drivers, but as far as
the general concept of the series is concerned,

Acked-by: Edward Cree <ecree@solarflare.com>

as per my reply on the thread you linked.
David Miller June 25, 2020, 11:16 p.m. UTC | #2
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Wed, 24 Jun 2020 16:06:02 -0600

> In 6570bc79c0df ("net: core: use listified Rx for GRO_NORMAL in
> napi_gro_receive()"), the GRO_NORMAL case stopped calling
> netif_receive_skb_internal, checking its return value, and returning
> GRO_DROP in case it failed. Instead, it calls into
> netif_receive_skb_list_internal (after a bit of indirection), which
> doesn't return any error. Therefore, napi_gro_receive will never return
> GRO_DROP, making handling GRO_DROP dead code.
>
> I emailed the author of 6570bc79c0df on netdev [1] to see if this change
> was intentional, but the dlink.ru email address has been disconnected,
> and looking a bit further myself, it seems somewhat infeasible to start
> propagating return values backwards from the internal machinations of
> netif_receive_skb_list_internal.
>
> Taking a look at all the callers of napi_gro_receive, it appears that
> three are checking the return value for the purpose of comparing it to
> the now never-happening GRO_DROP, and one just casts it to (void), a
> likely historical leftover. Every other of the 120 callers does not
> bother checking the return value.
>
> And it seems like these remaining 116 callers are doing the right thing:
> after calling napi_gro_receive, the packet is now in the hands of the
> upper layers of the networking stack, and the device driver itself has no
> business now making decisions based on what the upper layers choose to
> do. Incrementing stats counters on GRO_DROP seems like a mistake, made
> by these three drivers, but not by the remaining 117.
>
> It would seem, therefore, that after rectifying these four callers of
> napi_gro_receive, that I should go ahead and just remove returning the
> value from napi_gro_receive all together. However, napi_gro_receive has
> a function event tracer, and being able to introspect into the
> networking stack to see how often napi_gro_receive is returning whatever
> interesting GRO status (aside from _DROP) remains an interesting
> data point worth keeping for debugging.
>
> So, this series simply gets rid of the return value checking for the
> four useless places where that check never evaluates to anything
> meaningful.
>
> [1] https://lore.kernel.org/netdev/20200624210606.GA1362687@zx2c4.com/

Series applied, thank you.