From patchwork Tue May 30 14:16:25 2023
X-Patchwork-Id: 687004
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Herbert Xu, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
 Matthew Wilcox, Jens Axboe, linux-crypto@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeff Layton,
 Steve French, Shyam Prasad N, Rohith Surabattula,
 linux-cachefs@redhat.com, linux-cifs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH net-next v2 01/10] Drop the netfs_ prefix from
 netfs_extract_iter_to_sg()
Date: Tue, 30 May 2023 15:16:25 +0100
Message-ID: <20230530141635.136968-2-dhowells@redhat.com>
In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com>
References: <20230530141635.136968-1-dhowells@redhat.com>

Rename netfs_extract_iter_to_sg() and its auxiliary functions to drop the
netfs_ prefix.

Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jens Axboe
cc: Herbert Xu
cc: "Matthew Wilcox (Oracle)"
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-crypto@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: netdev@vger.kernel.org
---

Notes:
    ver #2:
     - Put the "netfs_" prefix removal first to shorten lines and avoid
       checkpatch 80-char warnings.

 fs/cifs/smb2ops.c     |  4 +--
 fs/cifs/smbdirect.c   |  2 +-
 fs/netfs/iterator.c   | 66 +++++++++++++++++++++----------------------
 include/linux/netfs.h |  6 ++--
 4 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 5065398665f1..2a0e0a7f009c 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -4334,8 +4334,8 @@ static void *smb2_get_aead_req(struct crypto_aead *tfm, struct smb_rqst *rqst,
 		}
 		sgtable.orig_nents = sgtable.nents;
 
-		rc = netfs_extract_iter_to_sg(iter, count, &sgtable,
-					      num_sgs - sgtable.nents, 0);
+		rc = extract_iter_to_sg(iter, count, &sgtable,
+					num_sgs - sgtable.nents, 0);
 		iov_iter_revert(iter, rc);
 		sgtable.orig_nents = sgtable.nents;
 	}
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 0362ebd4fa0f..223e17c16b60 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -2227,7 +2227,7 @@ static int smbd_iter_to_mr(struct smbd_connection *info,
 
 	memset(sgt->sgl, 0, max_sg * sizeof(struct scatterlist));
 
-	ret = netfs_extract_iter_to_sg(iter, iov_iter_count(iter), sgt, max_sg, 0);
+	ret = extract_iter_to_sg(iter, iov_iter_count(iter), sgt, max_sg, 0);
 	WARN_ON(ret < 0);
 	if (sgt->nents > 0)
 		sg_mark_end(&sgt->sgl[sgt->nents - 1]);
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 8a4c86687429..f8eba3de1a97 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -106,11 +106,11 @@ EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
  * Extract and pin a list of up to sg_max pages from UBUF- or IOVEC-class
  * iterators, and add them to the scatterlist.
  */
-static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter,
-					ssize_t maxsize,
-					struct sg_table *sgtable,
-					unsigned int sg_max,
-					iov_iter_extraction_t extraction_flags)
+static ssize_t extract_user_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
 {
 	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
 	struct page **pages;
@@ -159,11 +159,11 @@ static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter,
  * Extract up to sg_max pages from a BVEC-type iterator and add them to the
  * scatterlist.  The pages are not pinned.
  */
-static ssize_t netfs_extract_bvec_to_sg(struct iov_iter *iter,
-					ssize_t maxsize,
-					struct sg_table *sgtable,
-					unsigned int sg_max,
-					iov_iter_extraction_t extraction_flags)
+static ssize_t extract_bvec_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
 {
 	const struct bio_vec *bv = iter->bvec;
 	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
@@ -205,11 +205,11 @@ static ssize_t netfs_extract_bvec_to_sg(struct iov_iter *iter,
  * scatterlist.  This can deal with vmalloc'd buffers as well as kmalloc'd or
  * static buffers.  The pages are not pinned.
 */
-static ssize_t netfs_extract_kvec_to_sg(struct iov_iter *iter,
-					ssize_t maxsize,
-					struct sg_table *sgtable,
-					unsigned int sg_max,
-					iov_iter_extraction_t extraction_flags)
+static ssize_t extract_kvec_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
 {
 	const struct kvec *kv = iter->kvec;
 	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
@@ -266,11 +266,11 @@ static ssize_t netfs_extract_kvec_to_sg(struct iov_iter *iter,
  * Extract up to sg_max folios from an XARRAY-type iterator and add them to
  * the scatterlist.  The pages are not pinned.
  */
-static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter,
-					  ssize_t maxsize,
-					  struct sg_table *sgtable,
-					  unsigned int sg_max,
-					  iov_iter_extraction_t extraction_flags)
+static ssize_t extract_xarray_to_sg(struct iov_iter *iter,
+				    ssize_t maxsize,
+				    struct sg_table *sgtable,
+				    unsigned int sg_max,
+				    iov_iter_extraction_t extraction_flags)
 {
 	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
 	struct xarray *xa = iter->xarray;
@@ -312,7 +312,7 @@ static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter,
 }
 
 /**
- * netfs_extract_iter_to_sg - Extract pages from an iterator and add ot an sglist
+ * extract_iter_to_sg - Extract pages from an iterator and add ot an sglist
  * @iter: The iterator to extract from
  * @maxsize: The amount of iterator to copy
 * @sgtable: The scatterlist table to fill in
@@ -339,9 +339,9 @@ static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter,
 * The iov_iter_extract_mode() function should be used to query how cleanup
 * should be performed.
 */
-ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
-				 struct sg_table *sgtable, unsigned int sg_max,
-				 iov_iter_extraction_t extraction_flags)
+ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
+			   struct sg_table *sgtable, unsigned int sg_max,
+			   iov_iter_extraction_t extraction_flags)
 {
 	if (maxsize == 0)
 		return 0;
@@ -349,21 +349,21 @@ ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
 	switch (iov_iter_type(iter)) {
 	case ITER_UBUF:
 	case ITER_IOVEC:
-		return netfs_extract_user_to_sg(iter, maxsize, sgtable, sg_max,
-						extraction_flags);
+		return extract_user_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
 	case ITER_BVEC:
-		return netfs_extract_bvec_to_sg(iter, maxsize, sgtable, sg_max,
-						extraction_flags);
+		return extract_bvec_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
 	case ITER_KVEC:
-		return netfs_extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
-						extraction_flags);
+		return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
 	case ITER_XARRAY:
-		return netfs_extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
-						  extraction_flags);
+		return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
+					    extraction_flags);
 	default:
 		pr_err("%s(%u) unsupported\n", __func__, iov_iter_type(iter));
 		WARN_ON_ONCE(1);
 		return -EIO;
 	}
 }
-EXPORT_SYMBOL_GPL(netfs_extract_iter_to_sg);
+EXPORT_SYMBOL_GPL(extract_iter_to_sg);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a1f3522daa69..55e201c3a841 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -301,9 +301,9 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
 				struct iov_iter *new,
 				iov_iter_extraction_t extraction_flags);
 struct sg_table;
-ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t len,
-				 struct sg_table *sgtable, unsigned int sg_max,
-				 iov_iter_extraction_t extraction_flags);
+ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t len,
+			   struct sg_table *sgtable, unsigned int sg_max,
+			   iov_iter_extraction_t extraction_flags);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
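All of the call sites converted here share one pattern: point an sg_table at a
caller-supplied scatterlist array, let extract_iter_to_sg() append page
fragments to it, then terminate the list. A minimal sketch of that pattern
follows; fill_sg_from_iter() is a hypothetical illustration, not part of the
series, and the sequence is taken from the smbdirect.c hunk above:

	static ssize_t fill_sg_from_iter(struct iov_iter *iter,
					 struct scatterlist *sgl,
					 unsigned int max_sg)
	{
		struct sg_table sgt = { .sgl = sgl, .nents = 0, .orig_nents = 0 };
		ssize_t ret;

		sg_init_table(sgl, max_sg);

		/* Appends entries starting at sgt.nents; returns the number
		 * of bytes added to the list, or a negative error code.
		 */
		ret = extract_iter_to_sg(iter, iov_iter_count(iter), &sgt,
					 max_sg, 0);
		if (ret < 0)
			return ret;
		if (sgt.nents > 0)
			sg_mark_end(&sgt.sgl[sgt.nents - 1]);
		return ret;
	}

Whether the pages later need unpinning depends on the iterator class: the
UBUF/IOVEC path pins its pages, while the BVEC, KVEC and XARRAY paths do not,
which is why callers record iov_iter_extract_will_pin() at extraction time.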
From patchwork Tue May 30 14:16:27 2023
X-Patchwork-Id: 687003
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Herbert Xu, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
 Matthew Wilcox, Jens Axboe, linux-crypto@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeff Layton,
 Steve French, Shyam Prasad N, Rohith Surabattula, Simon Horman,
 linux-cachefs@redhat.com, linux-cifs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH net-next v2 03/10] Wrap lines at 80
Date: Tue, 30 May 2023 15:16:27 +0100
Message-ID: <20230530141635.136968-4-dhowells@redhat.com>
In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com>
References: <20230530141635.136968-1-dhowells@redhat.com>

Wrap a line at 80 to stop checkpatch complaining.

Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jens Axboe
cc: Herbert Xu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Matthew Wilcox
cc: Simon Horman
cc: linux-crypto@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: netdev@vger.kernel.org
---
 fs/netfs/iterator.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index f41a37bca1e8..9f09dc30ceb6 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -119,7 +119,8 @@ static ssize_t extract_user_to_sg(struct iov_iter *iter,
 	size_t len, off;
 
 	/* We decant the page list into the tail of the scatterlist */
-	pages = (void *)sgtable->sgl + array_size(sg_max, sizeof(struct scatterlist));
+	pages = (void *)sgtable->sgl +
+		array_size(sg_max, sizeof(struct scatterlist));
 	pages -= sg_max;
 
 	do {

From patchwork Tue May 30 14:16:30 2023
X-Patchwork-Id: 687001
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Herbert Xu, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
 Matthew Wilcox, Jens Axboe, linux-crypto@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 06/10] crypto: af_alg: Use extract_iter_to_sg()
 to create scatterlists
Date: Tue, 30 May 2023 15:16:30 +0100
Message-ID: <20230530141635.136968-7-dhowells@redhat.com>
In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com>
References: <20230530141635.136968-1-dhowells@redhat.com>

Use extract_iter_to_sg() to decant the destination iterator into a
scatterlist in af_alg_get_rsgl().  af_alg_make_sg() can then be removed.

Signed-off-by: David Howells
cc: Herbert Xu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
---

Notes:
    ver #2)
     - Fix some checkpatch warnings.

 crypto/af_alg.c         | 57 +++++++++++------------------------------
 crypto/algif_aead.c     | 16 +++++++-----
 crypto/algif_hash.c     | 18 +++++++++----
 crypto/algif_skcipher.c |  2 +-
 include/crypto/if_alg.h |  6 ++---
 5 files changed, 40 insertions(+), 59 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 7caff10df643..b8bf6d8525ba 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -531,45 +531,11 @@ static const struct net_proto_family alg_family = {
 	.owner	=	THIS_MODULE,
 };
 
-int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
-{
-	struct page **pages = sgl->pages;
-	size_t off;
-	ssize_t n;
-	int npages, i;
-
-	n = iov_iter_extract_pages(iter, &pages, len, ALG_MAX_PAGES, 0, &off);
-	if (n < 0)
-		return n;
-
-	sgl->need_unpin = iov_iter_extract_will_pin(iter);
-
-	npages = DIV_ROUND_UP(off + n, PAGE_SIZE);
-	if (WARN_ON(npages == 0))
-		return -EINVAL;
-
-	/* Add one extra for linking */
-	sg_init_table(sgl->sg, npages + 1);
-
-	for (i = 0, len = n; i < npages; i++) {
-		int plen = min_t(int, len, PAGE_SIZE - off);
-
-		sg_set_page(sgl->sg + i, sgl->pages[i], plen, off);
-
-		off = 0;
-		len -= plen;
-	}
-	sg_mark_end(sgl->sg + npages - 1);
-
-	sgl->npages = npages;
-
-	return n;
-}
-EXPORT_SYMBOL_GPL(af_alg_make_sg);
-
 static void af_alg_link_sg(struct af_alg_sgl *sgl_prev,
 			   struct af_alg_sgl *sgl_new)
 {
-	sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
-	sg_chain(sgl_prev->sg, sgl_prev->npages + 1, sgl_new->sg);
+	sg_unmark_end(sgl_prev->sgt.sgl + sgl_prev->sgt.nents - 1);
+	sg_chain(sgl_prev->sgt.sgl, sgl_prev->sgt.nents + 1, sgl_new->sgt.sgl);
 }
 
 void af_alg_free_sg(struct af_alg_sgl *sgl)
@@ -577,8 +543,8 @@ void af_alg_free_sg(struct af_alg_sgl *sgl)
 	int i;
 
 	if (sgl->need_unpin)
-		for (i = 0; i < sgl->npages; i++)
-			unpin_user_page(sgl->pages[i]);
+		for (i = 0; i < sgl->sgt.nents; i++)
+			unpin_user_page(sg_page(&sgl->sgt.sgl[i]));
 }
 EXPORT_SYMBOL_GPL(af_alg_free_sg);
 
@@ -1292,8 +1258,8 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
 
 	while (maxsize > len && msg_data_left(msg)) {
 		struct af_alg_rsgl *rsgl;
+		ssize_t err;
 		size_t seglen;
-		int err;
 
 		/* limit the amount of readable buffers */
 		if (!af_alg_readable(sk))
@@ -1310,16 +1276,23 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
 			return -ENOMEM;
 		}
 
-		rsgl->sgl.npages = 0;
+		rsgl->sgl.sgt.sgl = rsgl->sgl.sgl;
+		rsgl->sgl.sgt.nents = 0;
+		rsgl->sgl.sgt.orig_nents = 0;
 		list_add_tail(&rsgl->list, &areq->rsgl_list);
 
-		/* make one iovec available as scatterlist */
-		err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
+		sg_init_table(rsgl->sgl.sgt.sgl, ALG_MAX_PAGES);
+		err = extract_iter_to_sg(&msg->msg_iter, seglen, &rsgl->sgl.sgt,
+					 ALG_MAX_PAGES, 0);
 		if (err < 0) {
 			rsgl->sg_num_bytes = 0;
 			return err;
 		}
 
+		sg_mark_end(rsgl->sgl.sgt.sgl + rsgl->sgl.sgt.nents - 1);
+		rsgl->sgl.need_unpin =
+			iov_iter_extract_will_pin(&msg->msg_iter);
+
 		/* chain the new scatterlist with previous one */
 		if (areq->last_rsgl)
 			af_alg_link_sg(&areq->last_rsgl->sgl, &rsgl->sgl);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 42493b4d8ce4..829878025dba 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -210,7 +210,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	 */
 
 	/* Use the RX SGL as source (and destination) for crypto op. */
-	rsgl_src = areq->first_rsgl.sgl.sg;
+	rsgl_src = areq->first_rsgl.sgl.sgt.sgl;
 
 	if (ctx->enc) {
 		/*
@@ -224,7 +224,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		 *	    RX SGL: AAD || PT || Tag
 		 */
 		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
-					   areq->first_rsgl.sgl.sg, processed);
+					   areq->first_rsgl.sgl.sgt.sgl,
+					   processed);
 		if (err)
 			goto free;
 		af_alg_pull_tsgl(sk, processed, NULL, 0);
@@ -242,7 +243,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 
 		/* Copy AAD || CT to RX SGL buffer for in-place operation. */
 		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
-					   areq->first_rsgl.sgl.sg, outlen);
+					   areq->first_rsgl.sgl.sgt.sgl,
+					   outlen);
 		if (err)
 			goto free;
 
@@ -267,10 +269,10 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		if (usedpages) {
 			/* RX SGL present */
 			struct af_alg_sgl *sgl_prev = &areq->last_rsgl->sgl;
+			struct scatterlist *sg = sgl_prev->sgt.sgl;
 
-			sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
-			sg_chain(sgl_prev->sg, sgl_prev->npages + 1,
-				 areq->tsgl);
+			sg_unmark_end(sg + sgl_prev->sgt.nents - 1);
+			sg_chain(sg, sgl_prev->sgt.nents + 1, areq->tsgl);
 		} else
 			/* no RX SGL present (e.g. authentication only) */
 			rsgl_src = areq->tsgl;
@@ -278,7 +280,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 
 	/* Initialize the crypto operation */
 	aead_request_set_crypt(&areq->cra_u.aead_req, rsgl_src,
-			       areq->first_rsgl.sgl.sg, used, ctx->iv);
+			       areq->first_rsgl.sgl.sgt.sgl, used, ctx->iv);
 	aead_request_set_ad(&areq->cra_u.aead_req, ctx->aead_assoclen);
 	aead_request_set_tfm(&areq->cra_u.aead_req, tfm);
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 63af72e19fa8..16c69c4b9c62 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -91,13 +91,21 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
 		if (len > limit)
 			len = limit;
 
-		len = af_alg_make_sg(&ctx->sgl, &msg->msg_iter, len);
+		ctx->sgl.sgt.sgl = ctx->sgl.sgl;
+		ctx->sgl.sgt.nents = 0;
+		ctx->sgl.sgt.orig_nents = 0;
+
+		len = extract_iter_to_sg(&msg->msg_iter, len, &ctx->sgl.sgt,
+					 ALG_MAX_PAGES, 0);
 		if (len < 0) {
 			err = copied ? 0 : len;
 			goto unlock;
 		}
+		sg_mark_end(ctx->sgl.sgt.sgl + ctx->sgl.sgt.nents);
+
+		ctx->sgl.need_unpin = iov_iter_extract_will_pin(&msg->msg_iter);
 
-		ahash_request_set_crypt(&ctx->req, ctx->sgl.sg, NULL, len);
+		ahash_request_set_crypt(&ctx->req, ctx->sgl.sgt.sgl, NULL, len);
 
 		err = crypto_wait_req(crypto_ahash_update(&ctx->req),
 				      &ctx->wait);
@@ -141,8 +149,8 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
 		flags |= MSG_MORE;
 
 	lock_sock(sk);
-	sg_init_table(ctx->sgl.sg, 1);
-	sg_set_page(ctx->sgl.sg, page, size, offset);
+	sg_init_table(ctx->sgl.sgl, 1);
+	sg_set_page(ctx->sgl.sgl, page, size, offset);
 
 	if (!(flags & MSG_MORE)) {
 		err = hash_alloc_result(sk, ctx);
@@ -151,7 +159,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
 	} else if (!ctx->more)
 		hash_free_result(sk, ctx);
 
-	ahash_request_set_crypt(&ctx->req, ctx->sgl.sg, ctx->result, size);
+	ahash_request_set_crypt(&ctx->req, ctx->sgl.sgl, ctx->result, size);
 
 	if (!(flags & MSG_MORE)) {
 		if (ctx->more)
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index ee8890ee8f33..a251cd6bd5b9 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -105,7 +105,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	/* Initialize the crypto operation */
 	skcipher_request_set_tfm(&areq->cra_u.skcipher_req, tfm);
 	skcipher_request_set_crypt(&areq->cra_u.skcipher_req, areq->tsgl,
-				   areq->first_rsgl.sgl.sg, len, ctx->iv);
+				   areq->first_rsgl.sgl.sgt.sgl, len, ctx->iv);
 
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 46494b33f5bc..34224e77f5a2 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -56,9 +56,8 @@ struct af_alg_type {
 };
 
 struct af_alg_sgl {
-	struct scatterlist sg[ALG_MAX_PAGES + 1];
-	struct page *pages[ALG_MAX_PAGES];
-	unsigned int npages;
+	struct sg_table sgt;
+	struct scatterlist sgl[ALG_MAX_PAGES + 1];
 	bool need_unpin;
 };
 
@@ -164,7 +163,6 @@ int af_alg_release(struct socket *sock);
 void af_alg_release_parent(struct sock *sk);
 int af_alg_accept(struct sock *sk, struct socket *newsock, bool kern);
 
-int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len);
 void af_alg_free_sg(struct af_alg_sgl *sgl);
 
 static inline struct alg_sock *alg_sk(struct sock *sk)
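The release side of that pattern, as af_alg_free_sg() above now does it, keys
the cleanup on the need_unpin flag captured at extraction time. The same logic
for a generic caller would look roughly like this (release_extracted_sg() is a
hypothetical sketch, not an API from the patch):

	static void release_extracted_sg(struct sg_table *sgt, bool need_unpin)
	{
		unsigned int i;

		/* Only user-backed (UBUF/IOVEC) iterators leave pinned
		 * pages behind; kernel-backed sources need no release.
		 */
		if (!need_unpin)
			return;
		for (i = 0; i < sgt->nents; i++)
			unpin_user_page(sg_page(&sgt->sgl[i]));
	}

Folding the sg_table into struct af_alg_sgl is what lets the pages array and
npages counter be dropped: the table's nents now tracks both the fill level
and the unpin range.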
From patchwork Tue May 30 14:16:31 2023
X-Patchwork-Id: 687002
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Herbert Xu, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
 Matthew Wilcox, Jens Axboe, linux-crypto@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 07/10] crypto: af_alg: Indent the loop in
 af_alg_sendmsg()
Date: Tue, 30 May 2023 15:16:31 +0100
Message-ID: <20230530141635.136968-8-dhowells@redhat.com>
In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com>
References: <20230530141635.136968-1-dhowells@redhat.com>

Put the loop in af_alg_sendmsg() into an if-statement to indent it and make
the next patch easier to review, as that patch will add another branch,
handling MSG_SPLICE_PAGES, to the if-statement.

Signed-off-by: David Howells
cc: Herbert Xu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
---

Notes:
    ver #2)
     - Fix a checkpatch warning.

 crypto/af_alg.c | 51 ++++++++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 24 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index b8bf6d8525ba..fd56ccff6fed 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1030,35 +1030,38 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 		if (sgl->cur)
 			sg_unmark_end(sg + sgl->cur - 1);
 
-		do {
-			struct page *pg;
-			unsigned int i = sgl->cur;
+		if (1 /* TODO check MSG_SPLICE_PAGES */) {
+			do {
+				struct page *pg;
+				unsigned int i = sgl->cur;
 
-			plen = min_t(size_t, len, PAGE_SIZE);
+				plen = min_t(size_t, len, PAGE_SIZE);
 
-			pg = alloc_page(GFP_KERNEL);
-			if (!pg) {
-				err = -ENOMEM;
-				goto unlock;
-			}
+				pg = alloc_page(GFP_KERNEL);
+				if (!pg) {
+					err = -ENOMEM;
+					goto unlock;
+				}
 
-			sg_assign_page(sg + i, pg);
+				sg_assign_page(sg + i, pg);
 
-			err = memcpy_from_msg(page_address(sg_page(sg + i)),
-					      msg, plen);
-			if (err) {
-				__free_page(sg_page(sg + i));
-				sg_assign_page(sg + i, NULL);
-				goto unlock;
-			}
+				err = memcpy_from_msg(
+					page_address(sg_page(sg + i)),
+					msg, plen);
+				if (err) {
+					__free_page(sg_page(sg + i));
+					sg_assign_page(sg + i, NULL);
+					goto unlock;
+				}
 
-			sg[i].length = plen;
-			len -= plen;
-			ctx->used += plen;
-			copied += plen;
-			size -= plen;
-			sgl->cur++;
-		} while (len && sgl->cur < MAX_SGL_ENTS);
+				sg[i].length = plen;
+				len -= plen;
+				ctx->used += plen;
+				copied += plen;
+				size -= plen;
+				sgl->cur++;
+			} while (len && sgl->cur < MAX_SGL_ENTS);
+		}
 
 		if (!size)
 			sg_mark_end(sg + sgl->cur - 1);
From patchwork Tue May 30 14:16:32 2023
X-Patchwork-Id: 687000
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Herbert Xu, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
 Matthew Wilcox, Jens Axboe, linux-crypto@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 08/10] crypto: af_alg: Support MSG_SPLICE_PAGES
Date: Tue, 30 May 2023 15:16:32 +0100
Message-ID: <20230530141635.136968-9-dhowells@redhat.com>
In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com>
References: <20230530141635.136968-1-dhowells@redhat.com>

Make AF_ALG sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced from the source iterator.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells
cc: Herbert Xu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
---
 crypto/af_alg.c         | 28 ++++++++++++++++++++++++++--
 crypto/algif_aead.c     | 22 +++++++++++-----------
 crypto/algif_skcipher.c |  8 ++++----
 3 files changed, 41 insertions(+), 17 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index fd56ccff6fed..62f4205d42e3 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -940,6 +940,10 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 	bool init = false;
 	int err = 0;
 
+	if ((msg->msg_flags & MSG_SPLICE_PAGES) &&
+	    !iov_iter_is_bvec(&msg->msg_iter))
+		return -EINVAL;
+
 	if (msg->msg_controllen) {
 		err = af_alg_cmsg_send(msg, &con);
 		if (err)
@@ -985,7 +989,7 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 	while (size) {
 		struct scatterlist *sg;
 		size_t len = size;
-		size_t plen;
+		ssize_t plen;
 
 		/* use the existing memory in an allocated page */
 		if (ctx->merge) {
@@ -1030,7 +1034,27 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 		if (sgl->cur)
 			sg_unmark_end(sg + sgl->cur - 1);
 
-		if (1 /* TODO check MSG_SPLICE_PAGES */) {
+		if (msg->msg_flags & MSG_SPLICE_PAGES) {
+			struct sg_table sgtable = {
+				.sgl		= sg,
+				.nents		= sgl->cur,
+				.orig_nents	= sgl->cur,
+			};
+
+			plen = extract_iter_to_sg(&msg->msg_iter, len, &sgtable,
+						  MAX_SGL_ENTS, 0);
+			if (plen < 0) {
+				err = plen;
+				goto unlock;
+			}
+
+			for (; sgl->cur < sgtable.nents; sgl->cur++)
+				get_page(sg_page(&sg[sgl->cur]));
+			len -= plen;
+			ctx->used += plen;
+			copied += plen;
+			size -= plen;
+		} else {
 			do {
 				struct page *pg;
 				unsigned int i = sgl->cur;
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 829878025dba..35bfa283748d 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -9,8 +9,8 @@
  * The following concept of the memory management is used:
  *
  * The kernel maintains two SGLs, the TX SGL and the RX SGL. The TX SGL is
- * filled by user space with the data submitted via sendpage/sendmsg. Filling
- * up the TX SGL does not cause a crypto operation -- the data will only be
+ * filled by user space with the data submitted via sendmsg. Filling up
+ * the TX SGL does not cause a crypto operation -- the data will only be
  * tracked by the kernel. Upon receipt of one recvmsg call, the caller must
  * provide a buffer which is tracked with the RX SGL.
 *
@@ -113,19 +113,19 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	}
 
 	/*
-	 * Data length provided by caller via sendmsg/sendpage that has not
-	 * yet been processed.
+	 * Data length provided by caller via sendmsg that has not yet been
+	 * processed.
 	 */
 	used = ctx->used;
 
 	/*
-	 * Make sure sufficient data is present -- note, the same check is
-	 * also present in sendmsg/sendpage. The checks in sendpage/sendmsg
-	 * shall provide an information to the data sender that something is
-	 * wrong, but they are irrelevant to maintain the kernel integrity.
-	 * We need this check here too in case user space decides to not honor
-	 * the error message in sendmsg/sendpage and still call recvmsg. This
-	 * check here protects the kernel integrity.
+	 * Make sure sufficient data is present -- note, the same check is also
+	 * present in sendmsg. The checks in sendmsg shall provide an
+	 * information to the data sender that something is wrong, but they are
+	 * irrelevant to maintain the kernel integrity.  We need this check
+	 * here too in case user space decides to not honor the error message
+	 * in sendmsg and still call recvmsg. This check here protects the
+	 * kernel integrity.
 	 */
 	if (!aead_sufficient_data(sk))
 		return -EINVAL;
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index a251cd6bd5b9..b1f321b9f846 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -9,10 +9,10 @@
  * The following concept of the memory management is used:
  *
  * The kernel maintains two SGLs, the TX SGL and the RX SGL. The TX SGL is
- * filled by user space with the data submitted via sendpage/sendmsg. Filling
- * up the TX SGL does not cause a crypto operation -- the data will only be
- * tracked by the kernel. Upon receipt of one recvmsg call, the caller must
- * provide a buffer which is tracked with the RX SGL.
+ * filled by user space with the data submitted via sendmsg. Filling up the TX
+ * SGL does not cause a crypto operation -- the data will only be tracked by
+ * the kernel. Upon receipt of one recvmsg call, the caller must provide a
+ * buffer which is tracked with the RX SGL.
 *
 * During the processing of the recvmsg operation, the cipher request is
 * allocated and prepared. As part of the recvmsg operation, the processed
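On the sender side, MSG_SPLICE_PAGES is a kernel-internal flag, and
af_alg_sendmsg() now insists on a BVEC iterator when it is set, so an
in-kernel caller replacing a former ->sendpage() invocation would look
roughly like this (a sketch under those assumptions; alg_splice_page() is a
hypothetical helper, not part of the series):

	static int alg_splice_page(struct socket *sock, struct page *page,
				   unsigned int offset, unsigned int len)
	{
		struct bio_vec bvec;
		struct msghdr msg = {
			.msg_flags = MSG_SPLICE_PAGES | MSG_MORE,
		};

		/* Describe the page fragment and wrap it in a BVEC
		 * iterator; the spliced path copies no data.
		 */
		bvec_set_page(&bvec, page, len, offset);
		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
		return sock_sendmsg(sock, &msg);
	}

The spliced pages keep their original ownership: the get_page() loop in the
af_alg.c hunk above takes an extra reference on each one, since
extract_iter_to_sg() does not pin the pages of a BVEC iterator.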