Message ID: 20240530132629.4180932-1-ofir.gal@volumez.com
State: Superseded
On 5/30/24 15:26, Ofir Gal wrote:
> Network drivers are using sendpage_ok() to check the first page of an
> iterator in order to disable MSG_SPLICE_PAGES. The iterator can
> represent a list of contiguous pages.
>
> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being used,
> and it requires all pages in the iterator to be sendable. Therefore it
> needs to check that each page is sendable.
>
> The patch introduces a helper, sendpages_ok(); it returns true if all
> the contiguous pages are sendable.
>
> Drivers that want to send contiguous pages with MSG_SPLICE_PAGES may
> use this helper to check whether the page list is OK. If the helper
> does not return true, the driver should remove the MSG_SPLICE_PAGES
> flag.
>
> Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
> ---
>  include/linux/net.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/include/linux/net.h b/include/linux/net.h
> index 688320b79fcc..b33bdc3e2031 100644
> --- a/include/linux/net.h
> +++ b/include/linux/net.h
> @@ -322,6 +322,26 @@ static inline bool sendpage_ok(struct page *page)
>  	return !PageSlab(page) && page_count(page) >= 1;
>  }
>
> +/*
> + * Check sendpage_ok() on contiguous pages.
> + */
> +static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
> +{
> +	unsigned int pagecount;
> +	size_t page_offset;
> +	int k;
> +
> +	page = page + offset / PAGE_SIZE;
> +	page_offset = offset % PAGE_SIZE;
> +	pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE);
> +

Don't we miss the first page for offset > PAGE_SIZE?
I'd rather check for all pages from 'page' up to (offset + len), just
to be on the safe side.

> +	for (k = 0; k < pagecount; k++)
> +		if (!sendpage_ok(page + k))
> +			return false;
> +
> +	return true;
> +}
> +
>  int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
>  		   size_t num, size_t len);
>  int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,

Cheers,

Hannes
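For reference, a minimal userspace model of the arithmetic under
discussion (the values, and the PAGE_SIZE and DIV_ROUND_UP definitions,
are reproduced here purely for illustration; this is not kernel code).
With offset > PAGE_SIZE the checked range starts at the first page that
is actually transmitted, which is the behavior Hannes is questioning:

	#include <stdio.h>
	#include <stddef.h>

	#define PAGE_SIZE 4096UL
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		size_t offset = PAGE_SIZE + 100; /* send starts inside the second page */
		size_t len = 200;

		size_t first = offset / PAGE_SIZE;       /* 1: page 0 is skipped */
		size_t page_offset = offset % PAGE_SIZE; /* 100 */
		size_t pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE); /* 1 */

		/* sendpages_ok() calls sendpage_ok() on pages [first, first + pagecount) */
		printf("pages checked: [%zu, %zu)\n", first, first + pagecount);
		return 0;
	}

Running this prints "pages checked: [1, 2)": only the page covered by
the send is checked, and the page skipped by 'offset' is not.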
On 30/05/2024 20:58, Sagi Grimberg wrote:
> Hey Ofir,
>
> On 30/05/2024 16:26, Ofir Gal wrote:
>> skb_splice_from_iter() warns on !sendpage_ok(), which results in
>> nvme-tcp data transfer failure. This warning leads to hanging IO.
>>
>> nvme-tcp uses sendpage_ok() to check the first page of an iterator in
>> order to disable MSG_SPLICE_PAGES. The iterator can represent a list
>> of contiguous pages.
>>
>> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being
>> used, and it requires all pages in the iterator to be sendable.
>> skb_splice_from_iter() checks each page with sendpage_ok().
>>
>> nvme_tcp_try_send_data() might allow MSG_SPLICE_PAGES when the first
>> page is sendable but the next ones are not. skb_splice_from_iter()
>> will attempt to send all the pages in the iterator. When it reaches
>> an unsendable page the IO will hang.
>
> Interesting. Do you know where this buffer came from? I find it
> strange that we get a bvec with a contiguous segment which consists of
> non-slab-originated pages together with slab-originated pages... it is
> surprising to see a mix of the two.

I find it strange as well; I haven't investigated the origin of the IO
yet. I suspect the first 2 pages are the superblocks of the raid
(mdp_superblock_1 and bitmap_super_s) and the rest of the IO is the
bitmap.

I have stumbled on the same issue when running xfs_format (couldn't
reproduce it from scratch). I suspect there are other cases that mix
slab pages and non-slab pages.

> I'm wondering if this is something that happened before David's
> splice_pages changes. Maybe before that with multipage bvecs? Anyway,
> it is strange, never seen that.

I haven't bisected to the commit that caused the behavior, but I have
tested Ubuntu with a 6.2.0 kernel and the bug didn't occur (6.2.0
doesn't contain David's splice_pages changes).

I'm not familiar with the "multipage bvecs" patch, which patch do you
refer to?

> David, strange that nvme-tcp is setting a single contiguous element
> bvec but it is broken up into PAGE_SIZE increments in
> skb_splice_from_iter...
>
>>
>> The patch introduces a helper, sendpages_ok(); it returns true if all
>> the contiguous pages are sendable.
>>
>> Drivers that want to send contiguous pages with MSG_SPLICE_PAGES may
>> use this helper to check whether the page list is OK. If the helper
>> does not return true, the driver should remove the MSG_SPLICE_PAGES
>> flag.
>>
>>
>> The bug is reproducible; in order to reproduce we need nvme-over-tcp
>> controllers with an optimal IO size bigger than PAGE_SIZE. Creating a
>> raid with a bitmap over those devices reproduces the bug.
>>
>> In order to simulate a large optimal IO size you can use dm-stripe
>> with a single device.
>> A script to reproduce the issue on top of brd devices using dm-stripe
>> is attached below.
>
> This is a great candidate for blktests. Would be very beneficial to
> have it added there.

Good idea, will do!
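The usage pattern described in the cover letter would look roughly like
the sketch below. This is hypothetical kernel-style code, not the
actual nvme-tcp hunk from this series; the helper name
prepare_send_flags() is invented for illustration:

	/*
	 * Hypothetical driver call site: start optimistically with
	 * MSG_SPLICE_PAGES and drop the flag when any page in the
	 * contiguous range is not sendable, so skb_splice_from_iter()
	 * never hits an unsendable page mid-transfer.
	 */
	static void prepare_send_flags(struct msghdr *msg, struct page *page,
				       size_t offset, size_t len)
	{
		msg->msg_flags |= MSG_SPLICE_PAGES;

		if (!sendpages_ok(page, len, offset))
			msg->msg_flags &= ~MSG_SPLICE_PAGES;
	}

Clearing the flag falls back to copying the data instead of splicing
page references, which avoids the hang at the cost of a copy.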
On 03/06/2024 10:18, Hannes Reinecke wrote:
> On 5/30/24 15:26, Ofir Gal wrote:
>> Network drivers are using sendpage_ok() to check the first page of an
>> iterator in order to disable MSG_SPLICE_PAGES. The iterator can
>> represent a list of contiguous pages.
>>
>> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being
>> used, and it requires all pages in the iterator to be sendable.
>> Therefore it needs to check that each page is sendable.
>>
>> The patch introduces a helper, sendpages_ok(); it returns true if all
>> the contiguous pages are sendable.
>>
>> Drivers that want to send contiguous pages with MSG_SPLICE_PAGES may
>> use this helper to check whether the page list is OK. If the helper
>> does not return true, the driver should remove the MSG_SPLICE_PAGES
>> flag.
>>
>> Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
>> ---
>>   include/linux/net.h | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/include/linux/net.h b/include/linux/net.h
>> index 688320b79fcc..b33bdc3e2031 100644
>> --- a/include/linux/net.h
>> +++ b/include/linux/net.h
>> @@ -322,6 +322,26 @@ static inline bool sendpage_ok(struct page *page)
>>   	return !PageSlab(page) && page_count(page) >= 1;
>>   }
>>
>> +/*
>> + * Check sendpage_ok() on contiguous pages.
>> + */
>> +static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
>> +{
>> +	unsigned int pagecount;
>> +	size_t page_offset;
>> +	int k;
>> +
>> +	page = page + offset / PAGE_SIZE;
>> +	page_offset = offset % PAGE_SIZE;
>> +	pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE);
>> +
> Don't we miss the first page for offset > PAGE_SIZE?
> I'd rather check for all pages from 'page' up to (offset + len), just
> to be on the safe side.

We do, I copied the logic from iov_iter_extract_bvec_pages() to be
aligned with how skb_splice_from_iter() splits the pages.

I don't think we need to check a page we won't send, but I don't mind
being on the safe side.

>> +	for (k = 0; k < pagecount; k++)
>> +		if (!sendpage_ok(page + k))
>> +			return false;
>> +
>> +	return true;
>> +}
>> +
>>   int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
>>   		   size_t num, size_t len);
>>   int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
>
> Cheers,
>
> Hannes
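For comparison, the "safe side" variant Hannes proposes would check
every page from 'page' through the end of offset + len, including the
pages that 'offset' skips. A sketch, not part of the posted patch (the
name sendpages_ok_conservative is invented here):

	static inline bool sendpages_ok_conservative(struct page *page,
						     size_t len, size_t offset)
	{
		unsigned int pagecount = DIV_ROUND_UP(offset + len, PAGE_SIZE);
		int k;

		/* Also checks the pages before 'offset' that are never sent. */
		for (k = 0; k < pagecount; k++)
			if (!sendpage_ok(page + k))
				return false;

		return true;
	}

The trade-off is a few extra sendpage_ok() calls on pages that are
never handed to the socket, in exchange for not depending on how
skb_splice_from_iter() splits the range.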
diff --git a/reproduce.sh b/reproduce.sh
new file mode 100755
index 000000000..8ae226b18
--- /dev/null
+++ b/reproduce.sh
@@ -0,0 +1,114 @@
+#!/usr/bin/env sh
+# SPDX-License-Identifier: MIT
+
+set -e
+
+load_modules() {
+    modprobe nvme
+    modprobe nvme-tcp
+    modprobe nvmet
+    modprobe nvmet-tcp
+}
+
+setup_ns() {
+    local dev=$1
+    local num=$2
+    local port=$3
+    ls $dev > /dev/null
+
+    mkdir -p /sys/kernel/config/nvmet/subsystems/$num
+    cd /sys/kernel/config/nvmet/subsystems/$num
+    echo 1 > attr_allow_any_host
+
+    mkdir -p namespaces/$num
+    cd namespaces/$num/
+    echo $dev > device_path
+    echo 1 > enable
+
+    ln -s /sys/kernel/config/nvmet/subsystems/$num \
+        /sys/kernel/config/nvmet/ports/$port/subsystems/
+}
+
+setup_port() {
+    local num=$1
+
+    mkdir -p /sys/kernel/config/nvmet/ports/$num
+    cd /sys/kernel/config/nvmet/ports/$num
+    echo "127.0.0.1" > addr_traddr
+    echo tcp > addr_trtype
+    echo 8009 > addr_trsvcid
+    echo ipv4 > addr_adrfam
+}
+
+setup_big_opt_io() {
+    local dev=$1
+    local name=$2
+
+    # Change optimal IO size by creating dm stripe
+    dmsetup create $name --table \
+        "0 `blockdev --getsz $dev` striped 1 512 $dev 0"
+}
+
+setup_targets() {
+    # Setup ram devices instead of using real nvme devices
+    modprobe brd rd_size=1048576 rd_nr=2 # 1GiB
+
+    setup_big_opt_io /dev/ram0 ram0_big_opt_io
+    setup_big_opt_io /dev/ram1 ram1_big_opt_io
+
+    setup_port 1
+    setup_ns /dev/mapper/ram0_big_opt_io 1 1
+    setup_ns /dev/mapper/ram1_big_opt_io 2 1
+}
+
+setup_initiators() {
+    nvme connect -t tcp -n 1 -a 127.0.0.1 -s 8009
+    nvme connect -t tcp -n 2 -a 127.0.0.1 -s 8009
+}
+
+reproduce_warn() {
+    local devs=$@
+
+    # Hangs here
+    mdadm --create /dev/md/test_md --level=1 --bitmap=internal \
+        --bitmap-chunk=1024K --assume-clean --run --raid-devices=2 $devs
+}
+
+echo "###################################
+
+The script creates 2 nvme initiators in order to reproduce the bug.
+The script doesn't know which controllers it created; choose the new
+nvme controllers when asked.
+
+###################################
+
+Press enter to continue.
+"
+
+read tmp
+
+echo "# Creating 2 nvme controllers for the reproduction. Current nvme devices:"
+lsblk -s | grep nvme || true
+echo "---------------------------------
+"
+
+load_modules
+setup_targets
+setup_initiators
+
+sleep 0.1 # Wait for the new nvme ctrls to show up
+
+echo "# Created 2 nvme devices. nvme devices list:"
+
+lsblk -s | grep nvme
+echo "---------------------------------
+"
+
+echo "# Insert the new nvme devices as separate lines. Both should have a size of 1G"
+read dev1
+read dev2
+
+ls /dev/$dev1 > /dev/null
+ls /dev/$dev2 > /dev/null
+
+reproduce_warn /dev/$dev1 /dev/$dev2