[v3] aquantia: Remove the build_skb path

Message ID CY4PR1001MB231116E9371FBA2B8636C23DE8E00@CY4PR1001MB2311.namprd10.prod.outlook.com
State Superseded

Commit Message

Ramsay, Lincoln Nov. 19, 2020, 10:34 p.m. UTC
When performing IPv6 forwarding, there is an expectation that SKBs
will have some headroom. When forwarding a packet from the aquantia
driver, this does not always happen, triggering a kernel warning.

The build_skb path fails to allow for an SKB header, but the hardware
buffer it is built around won't allow for this anyway. Just always use the
slower codepath that copies memory into an allocated SKB.

Signed-off-by: Lincoln Ramsay <lincoln.ramsay@opengear.com>
---

We have an Aquantia 10G Ethernet interface in one of our devices. While testing a new feature, we discovered a problem with it. However, the problem only shows up in a very specific situation.

We are using firewalld as a frontend to nftables.
It sets up port forwarding (e.g. incoming port 5022 -> other_machine:22).
We also use masquerading on the outgoing packets, although I'm not sure this is relevant to the issue.
IPv4 works fine; IPv6 is the problem.
The bug is triggered by trying to hit the forwarded port (ssh -p 5022 addr) and is 100% reproducible.

The problem is that we get a kernel warning. It is triggered by this line in neighbour.h:
    if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) {

It seems that skb_headroom is only 14, when it is expected to be >= 16.
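
As far as I can tell, the 16 comes from the aligned copy of the cached
link-layer header in neigh_hh_output(). A simplified illustration (macro
values as in include/linux/netdevice.h; not verbatim kernel code):

#define HH_DATA_MOD	16
#define HH_DATA_ALIGN(__len) \
	(((__len) + (HH_DATA_MOD - 1)) & ~(HH_DATA_MOD - 1))

/* For Ethernet, hh_len is ETH_HLEN (14), so the cached header is copied
 * with an aligned length of HH_DATA_ALIGN(14) = 16 bytes, and the check
 * quoted above fires for any skb with less than 16 bytes of headroom.
 * An skb from the build_skb path starts with no headroom at all, so by
 * the time it is forwarded it only has the 14 bytes that
 * eth_type_trans() pulled off, which matches the value seen above.
 */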

2020-10-19 21:24:24 DEBUG   [console] ------------[ cut here ]------------
2020-10-19 21:24:24 DEBUG   [console] WARNING: CPU: 3 PID: 0 at include/net/neighbour.h:493 ip6_finish_output2+0x538/0x580
2020-10-19 21:24:24 DEBUG   [console] Modules linked in: xt_addrtype xt_MASQUERADE iptable_filter iptable_nat ip6table_raw ip6_tables xt_CT xt_tcpudp iptable_raw ip_tables nf_nat_tftp nft_nat nft_masq nft_objref nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_chain_nat nf_nat xfrm_user nf_conntrack_tftp nf_tables_set x_tables nft_ct nf_tables nfnetlink amd_spirom_nor(O) spi_nor(O) mtd(O) atlantic nct5104_wdt(O) gpio_amd(O) nct7491(O) sch_fq_codel tun qmi_wwan usbnet mii qcserial usb_wwan qcaux nsh nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 i2c_dev cdc_wdm br_netfilter bridge stp llc [last unloaded: nft_reject]
2020-10-19 21:24:24 DEBUG   [console] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G           O      5.4.65-og #1
2020-10-19 21:24:24 DEBUG   [console] RIP: 0010:ip6_finish_output2+0x538/0x580
2020-10-19 21:24:24 DEBUG   [console] Code: 87 e9 fc ff ff 44 89 fa 48 89 74 24 20 48 29 d7 e8 2d 4f 0c 00 48 8b 74 24 20 e9 cf fc ff ff 41 bf 10 00 00 00 e9 c4 fc ff ff <0f> 0b 4c 89 ef 41 bc 01 00 00 00 e8 d8 89 f0 ff e9 ee fc ff ff e8
2020-10-19 21:24:24 DEBUG   [console] RSP: 0018:ffffac2040114ab0 EFLAGS: 00010212
2020-10-19 21:24:24 DEBUG   [console] RAX: ffff9c041a0bf00e RBX: 000000000000000e RCX: ffff9c041a0bf00e
2020-10-19 21:24:24 DEBUG   [console] RDX: 000000000000000e RSI: ffff9c03ddf606c8 RDI: 0000000000000000
2020-10-19 21:24:24 DEBUG   [console] RBP: ffffac2040114b38 R08: 00000000f2000000 R09: 0000000002ec5955
2020-10-19 21:24:24 DEBUG   [console] R10: ffff9c041e57a440 R11: 000000000000000a R12: ffff9c03ddf60600
2020-10-19 21:24:24 DEBUG   [console] R13: ffff9c03dcf24800 R14: 0000000000000000 R15: 0000000000000010
2020-10-19 21:24:24 DEBUG   [console] FS:  0000000000000000(0000) GS:ffff9c0426b80000(0000) knlGS:0000000000000000
2020-10-19 21:24:24 DEBUG   [console] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2020-10-19 21:24:24 DEBUG   [console] CR2: 0000000000a0b4d8 CR3: 0000000222054000 CR4: 00000000000406e0
2020-10-19 21:24:24 DEBUG   [console] Call Trace:
2020-10-19 21:24:24 DEBUG   [console]  <IRQ>
2020-10-19 21:24:24 DEBUG   [console]  ? ipv6_confirm+0x85/0xf0 [nf_conntrack]
2020-10-19 21:24:24 DEBUG   [console]  ip6_output+0x67/0x130
2020-10-19 21:24:24 DEBUG   [console]  ? __ip6_finish_output+0x110/0x110
2020-10-19 21:24:24 DEBUG   [console]  ip6_forward+0x582/0x920
2020-10-19 21:24:24 DEBUG   [console]  ? ip6_frag_init+0x40/0x40
2020-10-19 21:24:24 DEBUG   [console]  ip6_sublist_rcv_finish+0x33/0x50
2020-10-19 21:24:24 DEBUG   [console]  ip6_sublist_rcv+0x212/0x240
2020-10-19 21:24:24 DEBUG   [console]  ? ip6_rcv_finish_core.isra.0+0xc0/0xc0
2020-10-19 21:24:24 DEBUG   [console]  ipv6_list_rcv+0x116/0x140
2020-10-19 21:24:24 DEBUG   [console]  __netif_receive_skb_list_core+0x1b1/0x260
2020-10-19 21:24:24 DEBUG   [console]  netif_receive_skb_list_internal+0x1ba/0x2d0
2020-10-19 21:24:24 DEBUG   [console]  ? napi_gro_receive+0x50/0x90
2020-10-19 21:24:24 DEBUG   [console]  gro_normal_list.part.0+0x14/0x30
2020-10-19 21:24:24 DEBUG   [console]  napi_complete_done+0x81/0x100
2020-10-19 21:24:24 DEBUG   [console]  aq_vec_poll+0x166/0x190 [atlantic]
2020-10-19 21:24:24 DEBUG   [console]  net_rx_action+0x12b/0x2f0
2020-10-19 21:24:24 DEBUG   [console]  __do_softirq+0xd1/0x213
2020-10-19 21:24:24 DEBUG   [console]  irq_exit+0xc8/0xd0
2020-10-19 21:24:24 DEBUG   [console]  do_IRQ+0x48/0xd0
2020-10-19 21:24:24 DEBUG   [console]  common_interrupt+0xf/0xf
2020-10-19 21:24:24 DEBUG   [console]  </IRQ>
2020-10-19 21:24:24 DEBUG   [console] ---[ end trace c1cba758301d342f ]---

After much hunting and debugging, I think I have figured out the issue here.

aq_ring.c has this code (edited slightly for brevity):

if (buff->is_eop && buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
    skb = build_skb(aq_buf_vaddr(&buff->rxdata), AQ_CFG_RX_FRAME_MAX);
    skb_put(skb, buff->len);
} else {
    skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);

There is a significant difference between the SKB produced by these 2 code paths. When napi_alloc_skb creates an SKB, there is a certain amount of headroom reserved. The same pattern appears to be used in all of the other ethernet drivers I have looked at. However, this is not done in the build_skb codepath.
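
To illustrate the difference (a simplified sketch, not verbatim driver code):

/* napi_alloc_skb() reserves headroom internally before handing the skb
 * back (NET_SKB_PAD + NET_IP_ALIGN, typically 64 bytes on x86_64):
 */
skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
/* skb_headroom(skb) is already non-zero here */

/* build_skb() wraps the raw buffer with skb->data at its very start, so
 * any headroom has to be left in the buffer by the driver and then
 * skb_reserve()d explicitly, which the atlantic path never does:
 */
skb = build_skb(aq_buf_vaddr(&buff->rxdata), AQ_CFG_RX_FRAME_MAX);
/* no skb_reserve() here, so skb_headroom(skb) == 0 */
skb_put(skb, buff->len);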

I believe that this is the ultimate cause of the warning we are seeing.

My original proposed patch followed a pattern used in the igb driver: some extra space in the buffer was reserved to hold the SKB headroom. However, this is problematic because the hardware doesn't know about the reserved space and may overwrite the data there. This patch removes the build_skb path entirely, avoiding the issue. It was tested as a patch against Linux 5.8.
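
For reference, that rejected igb-style approach would look roughly like this
(a hypothetical sketch, not the actual v1/v2 patch; AQ_SKB_PAD is a made-up
name for the reserved amount):

#define AQ_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)	/* hypothetical */

skb = build_skb(aq_buf_vaddr(&buff->rxdata), AQ_CFG_RX_FRAME_MAX);
skb_reserve(skb, AQ_SKB_PAD);
skb_put(skb, buff->len);

/* The catch: the NIC still DMAs into the buffer from the address it was
 * originally given, so unless the descriptor setup is also offset by
 * AQ_SKB_PAD the hardware can overwrite the reserved bytes.
 */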


 .../net/ethernet/aquantia/atlantic/aq_ring.c  | 127 ++++++++----------
 1 file changed, 53 insertions(+), 74 deletions(-)

Comments

Florian Westphal Nov. 19, 2020, 10:58 p.m. UTC | #1
Ramsay, Lincoln <Lincoln.Ramsay@digi.com> wrote:
> When performing IPv6 forwarding, there is an expectation that SKBs
> will have some headroom. When forwarding a packet from the aquantia
> driver, this does not always happen, triggering a kernel warning.
> 
> The build_skb path fails to allow for an SKB header, but the hardware

For build_skb path to work the buffer scheme would need to be changed
to reserve headroom, so yes, I think that the proposed patch is the
most convenient solution.

> buffer it is built around won't allow for this anyway. Just always use the
> slower codepath that copies memory into an allocated SKB.

I thought this changes the driver to always copy the entire packet, but
that's not true, see below.

> It seems that skb_headroom is only 14, when it is expected to be >= 16.

Yes, the kernel expects to have some headroom in skbs.

> aq_ring.c has this code (edited slightly for brevity):
> 
> if (buff->is_eop && buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
>     skb = build_skb(aq_buf_vaddr(&buff->rxdata), AQ_CFG_RX_FRAME_MAX);
>     skb_put(skb, buff->len);
> } else {
>     skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
> 
> There is a significant difference between the SKB produced by these 2 code paths. When napi_alloc_skb creates an SKB, there is a certain amount of headroom reserved. The same pattern appears to be used in all of the other ethernet drivers I have looked at. However, this is not done in the build_skb codepath.

I think the above should be part of the commit message rather than this
meta-space (which gets removed by git-am).

> +		skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
> +		if (unlikely(!skb)) {

AQ_CFG_RX_HDR_SIZE is 256 bytes, so for larger packets ...

> +		memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata),
> +			ALIGN(hdr_len, sizeof(long)));

This only copies the initial part and then...
> +		if (buff->len - hdr_len > 0) {
> +			skb_add_rx_frag(skb, 0, buff->rxdata.page,
> +					buff->rxdata.pg_off + hdr_len,
> +					buff->len - hdr_len,
>  					AQ_CFG_RX_FRAME_MAX);

The rest is added as a frag.

IOW, this patch looks good to me, but could you update the
commit message so it becomes clear that this doesn't result in a full
copy?

Perhaps something like:
'Just always use the napi_alloc_skb() code path that passes the buffer
 as a page fragment', or similar.

Thanks.

Patch

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
index 4f913658eea4..425e8e5afec7 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
@@ -413,85 +413,64 @@  int aq_ring_rx_clean(struct aq_ring_s *self,
 					      buff->rxdata.pg_off,
 					      buff->len, DMA_FROM_DEVICE);
 
-		/* for single fragment packets use build_skb() */
-		if (buff->is_eop &&
-		    buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
-			skb = build_skb(aq_buf_vaddr(&buff->rxdata),
+		skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
+		if (unlikely(!skb)) {
+			u64_stats_update_begin(&self->stats.rx.syncp);
+			self->stats.rx.skb_alloc_fails++;
+			u64_stats_update_end(&self->stats.rx.syncp);
+			err = -ENOMEM;
+			goto err_exit;
+		}
+		if (is_ptp_ring)
+			buff->len -=
+				aq_ptp_extract_ts(self->aq_nic, skb,
+					aq_buf_vaddr(&buff->rxdata),
+					buff->len);
+
+		hdr_len = buff->len;
+		if (hdr_len > AQ_CFG_RX_HDR_SIZE)
+			hdr_len = eth_get_headlen(skb->dev,
+							aq_buf_vaddr(&buff->rxdata),
+							AQ_CFG_RX_HDR_SIZE);
+
+		memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata),
+			ALIGN(hdr_len, sizeof(long)));
+
+		if (buff->len - hdr_len > 0) {
+			skb_add_rx_frag(skb, 0, buff->rxdata.page,
+					buff->rxdata.pg_off + hdr_len,
+					buff->len - hdr_len,
 					AQ_CFG_RX_FRAME_MAX);
-			if (unlikely(!skb)) {
-				u64_stats_update_begin(&self->stats.rx.syncp);
-				self->stats.rx.skb_alloc_fails++;
-				u64_stats_update_end(&self->stats.rx.syncp);
-				err = -ENOMEM;
-				goto err_exit;
-			}
-			if (is_ptp_ring)
-				buff->len -=
-					aq_ptp_extract_ts(self->aq_nic, skb,
-						aq_buf_vaddr(&buff->rxdata),
-						buff->len);
-			skb_put(skb, buff->len);
 			page_ref_inc(buff->rxdata.page);
-		} else {
-			skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
-			if (unlikely(!skb)) {
-				u64_stats_update_begin(&self->stats.rx.syncp);
-				self->stats.rx.skb_alloc_fails++;
-				u64_stats_update_end(&self->stats.rx.syncp);
-				err = -ENOMEM;
-				goto err_exit;
-			}
-			if (is_ptp_ring)
-				buff->len -=
-					aq_ptp_extract_ts(self->aq_nic, skb,
-						aq_buf_vaddr(&buff->rxdata),
-						buff->len);
-
-			hdr_len = buff->len;
-			if (hdr_len > AQ_CFG_RX_HDR_SIZE)
-				hdr_len = eth_get_headlen(skb->dev,
-							  aq_buf_vaddr(&buff->rxdata),
-							  AQ_CFG_RX_HDR_SIZE);
-
-			memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata),
-			       ALIGN(hdr_len, sizeof(long)));
-
-			if (buff->len - hdr_len > 0) {
-				skb_add_rx_frag(skb, 0, buff->rxdata.page,
-						buff->rxdata.pg_off + hdr_len,
-						buff->len - hdr_len,
-						AQ_CFG_RX_FRAME_MAX);
-				page_ref_inc(buff->rxdata.page);
-			}
+		}
 
-			if (!buff->is_eop) {
-				buff_ = buff;
-				i = 1U;
-				do {
-					next_ = buff_->next,
-					buff_ = &self->buff_ring[next_];
+		if (!buff->is_eop) {
+			buff_ = buff;
+			i = 1U;
+			do {
+				next_ = buff_->next,
+				buff_ = &self->buff_ring[next_];
 
-					dma_sync_single_range_for_cpu(
-							aq_nic_get_dev(self->aq_nic),
-							buff_->rxdata.daddr,
-							buff_->rxdata.pg_off,
-							buff_->len,
-							DMA_FROM_DEVICE);
-					skb_add_rx_frag(skb, i++,
-							buff_->rxdata.page,
-							buff_->rxdata.pg_off,
-							buff_->len,
-							AQ_CFG_RX_FRAME_MAX);
-					page_ref_inc(buff_->rxdata.page);
-					buff_->is_cleaned = 1;
-
-					buff->is_ip_cso &= buff_->is_ip_cso;
-					buff->is_udp_cso &= buff_->is_udp_cso;
-					buff->is_tcp_cso &= buff_->is_tcp_cso;
-					buff->is_cso_err |= buff_->is_cso_err;
+				dma_sync_single_range_for_cpu(
+						aq_nic_get_dev(self->aq_nic),
+						buff_->rxdata.daddr,
+						buff_->rxdata.pg_off,
+						buff_->len,
+						DMA_FROM_DEVICE);
+				skb_add_rx_frag(skb, i++,
+						buff_->rxdata.page,
+						buff_->rxdata.pg_off,
+						buff_->len,
+						AQ_CFG_RX_FRAME_MAX);
+				page_ref_inc(buff_->rxdata.page);
+				buff_->is_cleaned = 1;
 
-				} while (!buff_->is_eop);
-			}
+				buff->is_ip_cso &= buff_->is_ip_cso;
+				buff->is_udp_cso &= buff_->is_udp_cso;
+				buff->is_tcp_cso &= buff_->is_tcp_cso;
+				buff->is_cso_err |= buff_->is_cso_err;
+
+			} while (!buff_->is_eop);
 		}
 
 		if (buff->is_vlan)