
iov_iter: Four fixes for ITER_XARRAY

Message ID 3545034.1619392490@warthog.procyon.org.uk
State New
Series iov_iter: Four fixes for ITER_XARRAY

Commit Message

David Howells April 25, 2021, 11:14 p.m. UTC
Hi Al,

I think this patch should include all the fixes necessary.  I could merge
it in, but I think it might be better to tag it on the end as an additional
patch.

David
---
iov_iter: Four fixes for ITER_XARRAY

Fix four things[1] in the patch that adds ITER_XARRAY[2]:

 (1) Remove the address_space struct predeclaration.  This is a holdover
     from when it was ITER_MAPPING.

 (2) Fix _copy_mc_to_iter() so that the xarray segment updates count and
     iov_offset in the iterator before returning.

 (3) Fix iov_iter_alignment() to not loop in the xarray case.  Because the
     middle pages are all whole pages, only the end pages need be
     considered - and this can be reduced to just looking at the start
     position in the xarray and the iteration size.

 (4) Fix iov_iter_advance() to limit the size of the advance to no more
     than the remaining iteration size.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/YIVrJT8GwLI0Wlgx@zeniv-ca.linux.org.uk [1]
Link: https://lore.kernel.org/r/161918448151.3145707.11541538916600921083.stgit@warthog.procyon.org.uk [2]
---
 include/linux/uio.h |    1 -
 lib/iov_iter.c      |    5 +++++
 2 files changed, 5 insertions(+), 1 deletion(-)
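
To make the reasoning in (3) concrete, here is a minimal, self-contained
userspace sketch (the helper name and example values are invented for
illustration; this is not code from the patch).  In an ITER_XARRAY segment
every page between the first and the last is a whole page, so only the offset
into the first page and the length of the tail can disturb the alignment, and
both are captured by OR-ing the absolute start position with the remaining
byte count - no loop over the pages is needed:

#include <stdio.h>
#include <stddef.h>

/* Mirrors the xarray branch added to iov_iter_alignment(). */
static unsigned long xarray_alignment(unsigned long xarray_start,
				      size_t iov_offset, size_t count)
{
	return (xarray_start + iov_offset) | count;
}

int main(void)
{
	/* Unaligned head (0x200 into a page) and tail: low bits are set. */
	printf("%#lx\n", xarray_alignment(0x1000, 0x200, 0x2e00)); /* 0x3e00 */
	/* Page-aligned start and a whole number of 4KiB pages. */
	printf("%#lx\n", xarray_alignment(0x1000, 0, 0x3000));     /* 0x3000 */
	return 0;
}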

Comments

Al Viro April 26, 2021, 5:14 p.m. UTC | #1
On Mon, Apr 26, 2021 at 12:14:50AM +0100, David Howells wrote:
> Hi Al,
> 
> I think this patch should include all the fixes necessary.  I could merge
> it in, but I think it might be better to tag it on the end as an additional
> patch.

Looks sane, but I wonder if it would've been better to deal with this

> @@ -791,6 +791,8 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>  			curr_addr = (unsigned long) from;
>  			bytes = curr_addr - s_addr - rem;
>  			rcu_read_unlock();
> +			i->iov_offset += bytes;
> +			i->count -= bytes;
>  			return bytes;
>  		}
>  		})

by having your iterator check the return value of the X callback and, having
decremented .bv_len by the return value, break out of the loop:

       __label__ __bugger_off;                                          \
       xas_for_each(&xas, head, ULONG_MAX) {                            \
               if (xas_retry(&xas, head))                               \
                       continue;                                        \
               if (WARN_ON(xa_is_value(head)))                          \
                       break;                                           \
               if (WARN_ON(PageHuge(head)))                             \
                       break;                                           \
               for (j = (head->index < index) ? index - head->index : 0; \
                    j < thp_nr_pages(head); j++) {                      \
                       size_t left;                                     \
                       __v.bv_page = head + j;                          \
                       offset = (i->xarray_start + skip) & ~PAGE_MASK;  \
                       seg = PAGE_SIZE - offset;                        \
                       __v.bv_offset = offset;                          \
                       __v.bv_len = min(n, seg);                        \
                       left = (STEP);                                   \
                       __v.bv_len -= left;                              \
                       n -= __v.bv_len;                                 \
                       skip += __v.bv_len;                              \
                       if (!n || left)                                  \
                               goto __bugger_off;                       \
               }                                                        \
               if (n == 0)                                              \
                       break;                                           \
       }                                                                \
__bugger_off:                                                           \
       ;


Then rename iterate_and_advance() to __iterate_and_advance() and have
#define iterate_and_advance(....., X) __iterate_and_advance(....., ((void)(X),0))
with iterate_all_kinds() using iterate_xarray(...., ((void)(X),0)).

Then _copy_mc_to_iter() could use __iterate_and_advance(), getting rid of the
need to do anything special in the case of a short copy.  OTOH, I can do that
myself in a followup - not a problem.
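
For reference, here is a minimal, self-contained userspace sketch of the
((void)(X), 0) trick being suggested above (the macro and function names are
invented; this is not code from the series).  The wrapper evaluates the
caller's step expression for its side effects, discards its value and
substitutes 0, so callers going through the wrapping macro never see a short
step, while callers of the underscored variant can act on the real value:

#include <stdio.h>
#include <stddef.h>

/* Stand-in for the iterator core: it is told how much the step left behind. */
#define STEP_AND_REPORT(X)	step_and_report(X)
/* Wrapper for callers that don't care: X still runs, but 0 is reported. */
#define STEP_IGNORE_RESULT(X)	STEP_AND_REPORT(((void)(X), 0))

static size_t step_and_report(size_t left)
{
	printf("step left %zu byte(s) unconsumed\n", left);
	return left;
}

int main(void)
{
	size_t short_by = 3;	/* pretend the copy fell 3 bytes short */

	STEP_AND_REPORT(short_by);	/* prints 3: short step is visible */
	STEP_IGNORE_RESULT(short_by);	/* prints 0: short step is hidden */
	return 0;
}
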
Jeff Layton April 26, 2021, 7:02 p.m. UTC | #2
On Mon, 2021-04-26 at 00:14 +0100, David Howells wrote:
> Hi Al,
> 
> I think this patch should include all the fixes necessary.  I could merge
> it in, but I think it might be better to tag it on the end as an additional
> patch.
> 
> David
> ---
> iov_iter: Four fixes for ITER_XARRAY
> 
> Fix four things[1] in the patch that adds ITER_XARRAY[2]:
> 
>  (1) Remove the address_space struct predeclaration.  This is a holdover
>      from when it was ITER_MAPPING.
> 
>  (2) Fix _copy_mc_to_iter() so that the xarray segment updates count and
>      iov_offset in the iterator before returning.
> 
>  (3) Fix iov_iter_alignment() to not loop in the xarray case.  Because the
>      middle pages are all whole pages, only the end pages need be
>      considered - and this can be reduced to just looking at the start
>      position in the xarray and the iteration size.
> 
>  (4) Fix iov_iter_advance() to limit the size of the advance to no more
>      than the remaining iteration size.
> 
> Reported-by: Al Viro <viro@zeniv.linux.org.uk>
> Signed-off-by: David Howells <dhowells@redhat.com>
> Link: https://lore.kernel.org/r/YIVrJT8GwLI0Wlgx@zeniv-ca.linux.org.uk [1]
> Link: https://lore.kernel.org/r/161918448151.3145707.11541538916600921083.stgit@warthog.procyon.org.uk [2]
> ---
>  include/linux/uio.h |    1 -
>  lib/iov_iter.c      |    5 +++++
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index 5f5ffc45d4aa..d3ec87706d75 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -10,7 +10,6 @@
>  #include <uapi/linux/uio.h>
>  
>  struct page;
> -struct address_space;
>  struct pipe_inode_info;
>  
>  struct kvec {
> diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> index 44fa726a8323..61228a6c69f8 100644
> --- a/lib/iov_iter.c
> +++ b/lib/iov_iter.c
> @@ -791,6 +791,8 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>  			curr_addr = (unsigned long) from;
>  			bytes = curr_addr - s_addr - rem;
>  			rcu_read_unlock();
> +			i->iov_offset += bytes;
> +			i->count -= bytes;
>  			return bytes;
>  		}
>  		})
> @@ -1147,6 +1149,7 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
>  		return;
>  	}
>  	if (unlikely(iov_iter_is_xarray(i))) {
> +		size = min(size, i->count);
>  		i->iov_offset += size;
>  		i->count -= size;
>  		return;
> @@ -1346,6 +1349,8 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
>  			return size | i->iov_offset;
>  		return size;
>  	}
> +	if (unlikely(iov_iter_is_xarray(i)))
> +		return (i->xarray_start + i->iov_offset) | i->count;
>  	iterate_all_kinds(i, size, v,
>  		(res |= (unsigned long)v.iov_base | v.iov_len, 0),
>  		res |= v.bv_offset | v.bv_len,
> 

I did a test run with your v7 pile, this patch, and my ceph fscache
rework patches and it did fine. You can add:

Tested-by: Jeff Layton <jlayton@redhat.com>
David Wysochanski April 26, 2021, 9:38 p.m. UTC | #3
On Sun, Apr 25, 2021 at 7:15 PM David Howells <dhowells@redhat.com> wrote:
>
> Hi Al,
>
> I think this patch should include all the fixes necessary.  I could merge
> it in, but I think it might be better to tag it on the end as an additional
> patch.
>
> David
> ---
> iov_iter: Four fixes for ITER_XARRAY
>
> Fix four things[1] in the patch that adds ITER_XARRAY[2]:
>
>  (1) Remove the address_space struct predeclaration.  This is a holdover
>      from when it was ITER_MAPPING.
>
>  (2) Fix _copy_mc_to_iter() so that the xarray segment updates count and
>      iov_offset in the iterator before returning.
>
>  (3) Fix iov_iter_alignment() to not loop in the xarray case.  Because the
>      middle pages are all whole pages, only the end pages need be
>      considered - and this can be reduced to just looking at the start
>      position in the xarray and the iteration size.
>
>  (4) Fix iov_iter_advance() to limit the size of the advance to no more
>      than the remaining iteration size.
>
> Reported-by: Al Viro <viro@zeniv.linux.org.uk>
> Signed-off-by: David Howells <dhowells@redhat.com>
> Link: https://lore.kernel.org/r/YIVrJT8GwLI0Wlgx@zeniv-ca.linux.org.uk [1]
> Link: https://lore.kernel.org/r/161918448151.3145707.11541538916600921083.stgit@warthog.procyon.org.uk [2]
> ---
>  include/linux/uio.h |    1 -
>  lib/iov_iter.c      |    5 +++++
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index 5f5ffc45d4aa..d3ec87706d75 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -10,7 +10,6 @@
>  #include <uapi/linux/uio.h>
>
>  struct page;
> -struct address_space;
>  struct pipe_inode_info;
>
>  struct kvec {
> diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> index 44fa726a8323..61228a6c69f8 100644
> --- a/lib/iov_iter.c
> +++ b/lib/iov_iter.c
> @@ -791,6 +791,8 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>                         curr_addr = (unsigned long) from;
>                         bytes = curr_addr - s_addr - rem;
>                         rcu_read_unlock();
> +                       i->iov_offset += bytes;
> +                       i->count -= bytes;
>                         return bytes;
>                 }
>                 })
> @@ -1147,6 +1149,7 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
>                 return;
>         }
>         if (unlikely(iov_iter_is_xarray(i))) {
> +               size = min(size, i->count);
>                 i->iov_offset += size;
>                 i->count -= size;
>                 return;
> @@ -1346,6 +1349,8 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
>                         return size | i->iov_offset;
>                 return size;
>         }
> +       if (unlikely(iov_iter_is_xarray(i)))
> +               return (i->xarray_start + i->iov_offset) | i->count;
>         iterate_all_kinds(i, size, v,
>                 (res |= (unsigned long)v.iov_base | v.iov_len, 0),
>                 res |= v.bv_offset | v.bv_len,
>


You can add:
Tested-by: Dave Wysochanski <dwysocha@redhat.com>


I added this patch on top of your v7 series, then added my current NFS
patches to use the netfs lib.  I ran xfstests with fscache enabled on NFS
versions 3, 4.0, 4.1 and 4.2, as well as connectathon and some unit tests.

Patch

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 5f5ffc45d4aa..d3ec87706d75 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -10,7 +10,6 @@ 
 #include <uapi/linux/uio.h>
 
 struct page;
-struct address_space;
 struct pipe_inode_info;
 
 struct kvec {
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 44fa726a8323..61228a6c69f8 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -791,6 +791,8 @@  size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 			curr_addr = (unsigned long) from;
 			bytes = curr_addr - s_addr - rem;
 			rcu_read_unlock();
+			i->iov_offset += bytes;
+			i->count -= bytes;
 			return bytes;
 		}
 		})
@@ -1147,6 +1149,7 @@  void iov_iter_advance(struct iov_iter *i, size_t size)
 		return;
 	}
 	if (unlikely(iov_iter_is_xarray(i))) {
+		size = min(size, i->count);
 		i->iov_offset += size;
 		i->count -= size;
 		return;
@@ -1346,6 +1349,8 @@  unsigned long iov_iter_alignment(const struct iov_iter *i)
 			return size | i->iov_offset;
 		return size;
 	}
+	if (unlikely(iov_iter_is_xarray(i)))
+		return (i->xarray_start + i->iov_offset) | i->count;
 	iterate_all_kinds(i, size, v,
 		(res |= (unsigned long)v.iov_base | v.iov_len, 0),
 		res |= v.bv_offset | v.bv_len,
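
As an aside on fix (4), here is a minimal, self-contained userspace sketch
(a toy structure, not the kernel's struct iov_iter) of why the advance has to
be clamped: the remaining count is unsigned, so without the min() an advance
past the end of the iteration would wrap it around to a huge value instead of
stopping at zero.

#include <stdio.h>
#include <stddef.h>

struct toy_iter {
	size_t iov_offset;	/* bytes already consumed */
	size_t count;		/* bytes remaining */
};

static void toy_advance(struct toy_iter *i, size_t size)
{
	if (size > i->count)	/* the fix: never advance past the end */
		size = i->count;
	i->iov_offset += size;
	i->count -= size;
}

int main(void)
{
	struct toy_iter it = { .iov_offset = 0, .count = 100 };

	toy_advance(&it, 4096);	/* caller asks to advance past the end */
	printf("offset=%zu count=%zu\n", it.iov_offset, it.count);
	/* Prints "offset=100 count=0"; without the clamp, count would
	 * underflow to SIZE_MAX - 3995. */
	return 0;
}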