
[v3,0/2] libceph: add new iov_iter msg_data type and use it for reads

Message ID 20220701103013.12902-1-jlayton@kernel.org

Message

Jeff Layton July 1, 2022, 10:30 a.m. UTC
v3:
- flesh out kerneldoc header over osd_req_op_extent_osd_data_pages
- remove export of ceph_msg_data_add_iter

v2:
- make _next handler advance the iterator in preparation for coming
  changes to iov_iter_get_pages

Just a respin to address some minor nits pointed out by Xiubo.

------------------------8<-------------------------

This patchset was inspired by some earlier work that David Howells did
to add a similar type.

Currently, we take an iov_iter from the netfs layer, turn that into an
array of pages, and then pass that to the messenger which eventually
turns that back into an iov_iter before handing it back to the socket.
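
To make the contrast concrete, here is a rough sketch of the current
pages-based pattern (illustrative only, not a verbatim hunk from
fs/ceph/addr.c; "req", "iter" and "len" stand in for the real locals):

    struct page **pages;
    size_t page_off;
    ssize_t got;

    /* Pin the iterator's pages into a freshly allocated page array... */
    got = iov_iter_get_pages_alloc(iter, &pages, len, &page_off);
    if (got < 0)
            return got;

    /* ...and attach that array to the OSD read op. */
    osd_req_op_extent_osd_data_pages(req, 0, pages, got, page_off,
                                     false, false);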

This patchset adds a new ceph_msg_data_type that uses an iov_iter
directly instead of requiring an array of pages or bvecs. This allows
us to avoid an extra allocation in the buffered read path, and should
make it easier to plumb in write helpers later.
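
With the new type, a caller can hand the iterator straight to the OSD
request instead. A minimal sketch of what the netfs read path ends up
doing (again illustrative, not the exact patch hunk):

    struct iov_iter iter;

    /* Describe the pagecache range the read should land in... */
    iov_iter_xarray(&iter, READ, &rreq->mapping->i_pages, subreq->start, len);

    /* ...and attach it directly: no page array, no extra allocation. */
    osd_req_op_extent_osd_iter(req, 0, &iter);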

For now, this is still just a slow, stupid implementation that hands
the socket layer a page at a time like the existing messenger does. It
doesn't yet attempt to pass through the iov_iter directly.
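
Concretely, the cursor's _next handler just peeks one page out of the
iterator and feeds it to the same page-based socket helpers as before.
A sketch of the idea (field names here are illustrative, not
necessarily the ones in the patch):

    struct page *page;
    size_t off;
    ssize_t len;

    /* Grab a reference to the next page covered by the iterator. */
    len = iov_iter_get_pages(&cursor->iter, &page, PAGE_SIZE, 1, &off);
    if (len < 0)
            return ERR_PTR(len);

    /* Per the v2 note above, _next also advances the iterator past
     * what it just returned, then hands (page, off, len) to the
     * existing page-at-a-time socket path. */
    iov_iter_advance(&cursor->iter, len);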

I have some patches that pass the cursor's iov_iter directly to the
socket in the receive path, but it requires some infrastructure that's
not in mainline yet (iov_iter_scan(), for instance). It should be
possible to do something similar in the send path as well.
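
For illustration, the eventual receive-side goal is essentially to
point the socket's msghdr at the cursor's iterator rather than at
individual pages; very roughly (and glossing over the missing
infrastructure mentioned above):

    struct msghdr msg = { .msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL };
    int ret;

    msg.msg_iter = cursor->iter;        /* illustrative field name */
    ret = sock_recvmsg(sock, &msg, msg.msg_flags);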

Jeff Layton (2):
  libceph: add new iov_iter-based ceph_msg_data_type and
    ceph_osd_data_type
  ceph: use osd_req_op_extent_osd_iter for netfs reads

 fs/ceph/addr.c                  | 18 +------
 include/linux/ceph/messenger.h  |  8 ++++
 include/linux/ceph/osd_client.h |  4 ++
 net/ceph/messenger.c            | 84 +++++++++++++++++++++++++++++++++
 net/ceph/osd_client.c           | 27 +++++++++++
 5 files changed, 124 insertions(+), 17 deletions(-)

Comments

Xiubo Li July 4, 2022, 1:55 a.m. UTC | #1
On 7/1/22 6:30 PM, Jeff Layton wrote:
> v3:
> - flesh out kerneldoc header over osd_req_op_extent_osd_data_pages
> - remove export of ceph_msg_data_add_iter
>
> v2:
> - make _next handler advance the iterator in preparation for coming
>    changes to iov_iter_get_pages
>
> Just a respin to address some minor nits pointed out by Xiubo.
>
> ------------------------8<-------------------------
>
> This patchset was inspired by some earlier work that David Howells did
> to add a similar type.
>
> Currently, we take an iov_iter from the netfs layer, turn that into an
> array of pages, and then pass that to the messenger which eventually
> turns that back into an iov_iter before handing it back to the socket.
>
> This patchset adds a new ceph_msg_data_type that uses an iov_iter
> directly instead of requiring an array of pages or bvecs. This allows
> us to avoid an extra allocation in the buffered read path, and should
> make it easier to plumb in write helpers later.
>
> For now, this is still just a slow, stupid implementation that hands
> the socket layer a page at a time like the existing messenger does. It
> doesn't yet attempt to pass through the iov_iter directly.
>
> I have some patches that pass the cursor's iov_iter directly to the
> socket in the receive path, but it requires some infrastructure that's
> not in mainline yet (iov_iter_scan(), for instance). It should be
> possible to do something similar in the send path as well.
>
> Jeff Layton (2):
>    libceph: add new iov_iter-based ceph_msg_data_type and
>      ceph_osd_data_type
>    ceph: use osd_req_op_extent_osd_iter for netfs reads
>
>   fs/ceph/addr.c                  | 18 +------
>   include/linux/ceph/messenger.h  |  8 ++++
>   include/linux/ceph/osd_client.h |  4 ++
>   net/ceph/messenger.c            | 84 +++++++++++++++++++++++++++++++++
>   net/ceph/osd_client.c           | 27 +++++++++++
>   5 files changed, 124 insertions(+), 17 deletions(-)
>
Merged into the testing branch and will run the tests.

Thanks Jeff!

-- Xiubo
Christoph Hellwig July 4, 2022, 5:56 a.m. UTC | #2
On Fri, Jul 01, 2022 at 06:30:11AM -0400, Jeff Layton wrote:
> Currently, we take an iov_iter from the netfs layer, turn that into an
> array of pages, and then pass that to the messenger which eventually
> turns that back into an iov_iter before handing it back to the socket.
> 
> This patchset adds a new ceph_msg_data_type that uses an iov_iter
> directly instead of requiring an array of pages or bvecs. This allows
> us to avoid an extra allocation in the buffered read path, and should
> make it easier to plumb in write helpers later.
> 
> For now, this is still just a slow, stupid implementation that hands
> the socket layer a page at a time like the existing messenger does. It
> doesn't yet attempt to pass through the iov_iter directly.
> 
> I have some patches that pass the cursor's iov_iter directly to the
> socket in the receive path, but it requires some infrastructure that's
> not in mainline yet (iov_iter_scan(), for instance). It should be
> possible to do something similar in the send path as well.

Btw, is there any good reason to not simply replace ceph_msg_data
with an iov_iter entirely?
Jeff Layton July 5, 2022, 12:59 p.m. UTC | #3
On Sun, 2022-07-03 at 22:56 -0700, Christoph Hellwig wrote:
> On Fri, Jul 01, 2022 at 06:30:11AM -0400, Jeff Layton wrote:
> > Currently, we take an iov_iter from the netfs layer, turn that into an
> > array of pages, and then pass that to the messenger which eventually
> > turns that back into an iov_iter before handing it back to the socket.
> > 
> > This patchset adds a new ceph_msg_data_type that uses an iov_iter
> > directly instead of requiring an array of pages or bvecs. This allows
> > us to avoid an extra allocation in the buffered read path, and should
> > make it easier to plumb in write helpers later.
> > 
> > For now, this is still just a slow, stupid implementation that hands
> > the socket layer a page at a time like the existing messenger does. It
> > doesn't yet attempt to pass through the iov_iter directly.
> > 
> > I have some patches that pass the cursor's iov_iter directly to the
> > socket in the receive path, but it requires some infrastructure that's
> > not in mainline yet (iov_iter_scan(), for instance). It should be
> > possible to do something similar in the send path as well.
> 
> Btw, is there any good reason to not simply replace ceph_msg_data
> with an iov_iter entirely?
> 

Not really, no.

What I'd probably do is change the existing osd_req_op_* callers to use
the new iov_iter msg_data type first. Then, once they were all
converted, you could phase out struct ceph_msg_data altogether.
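
For what it's worth, a sketch of what that conversion might look like
for a caller that currently builds a bvec array (illustrative only;
whether each call site wraps bvecs, pages or something else would need
to be decided case by case):

    struct iov_iter iter;

    /* Wrap the existing bio_vec array in an iterator... */
    iov_iter_bvec(&iter, READ, bvecs, nr_bvecs, len);

    /* ...and hand it to the new helper instead of
     * osd_req_op_extent_osd_data_bvecs(). */
    osd_req_op_extent_osd_iter(req, 0, &iter);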