Message ID | 20231029045839.154071-1-ebiggers@kernel.org |
---|---|
State | New |
Series | RDMA/siw: use crypto_shash_digest() in siw_qp_prepare_tx() |
> -----Original Message-----
> From: Eric Biggers <ebiggers@kernel.org>
> Sent: Sunday, October 29, 2023 5:59 AM
> To: Bernard Metzler <BMT@zurich.ibm.com>; Jason Gunthorpe <jgg@ziepe.ca>;
> Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org
> Cc: linux-crypto@vger.kernel.org
> Subject: [EXTERNAL] [PATCH] RDMA/siw: use crypto_shash_digest() in
> siw_qp_prepare_tx()
>
> From: Eric Biggers <ebiggers@google.com>
>
> Simplify siw_qp_prepare_tx() by using crypto_shash_digest() instead of
> an init+update+final sequence. This should also improve performance.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>  drivers/infiniband/sw/siw/siw_qp_tx.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c
> b/drivers/infiniband/sw/siw/siw_qp_tx.c
> index 60b6a4135961..5b390f08f1cd 100644
> --- a/drivers/infiniband/sw/siw/siw_qp_tx.c
> +++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
> @@ -242,28 +242,24 @@ static int siw_qp_prepare_tx(struct siw_iwarp_tx
> *c_tx)
>  				c_tx->pkt.c_untagged.ddp_mo = 0;
>  			else
>  				c_tx->pkt.c_tagged.ddp_to =
>  					cpu_to_be64(wqe->sqe.raddr);
>  		}
>
>  		*(u32 *)crc = 0;
>  		/*
>  		 * Do complete CRC if enabled and short packet
>  		 */
> -		if (c_tx->mpa_crc_hd) {
> -			crypto_shash_init(c_tx->mpa_crc_hd);
> -			if (crypto_shash_update(c_tx->mpa_crc_hd,
> -						(u8 *)&c_tx->pkt,
> -						c_tx->ctrl_len))
> -				return -EINVAL;
> -			crypto_shash_final(c_tx->mpa_crc_hd, (u8 *)crc);
> -		}
> +		if (c_tx->mpa_crc_hd &&
> +		    crypto_shash_digest(c_tx->mpa_crc_hd, (u8 *)&c_tx->pkt,
> +					c_tx->ctrl_len, (u8 *)crc) != 0)
> +			return -EINVAL;
>  		c_tx->ctrl_len += MPA_CRC_SIZE;
>
>  		return PKT_COMPLETE;
>  	}
>  	c_tx->ctrl_len += MPA_CRC_SIZE;
>  	c_tx->sge_idx = 0;
>  	c_tx->sge_off = 0;
>  	c_tx->pbl_idx = 0;
>
>  	/*
>
> base-commit: 2af9b20dbb39f6ebf9b9b6c090271594627d818e
> --
> 2.42.0

Thank you Eric, looks good to me!

Acked-by: Bernard Metzler <bmt@zurich.ibm.com>
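For context on the API change being acked here: the old code drove the kernel shash interface with an explicit init/update/final sequence (and ignored the return values of crypto_shash_init() and crypto_shash_final()), while the new code makes a single crypto_shash_digest() call. Below is a minimal sketch of the two patterns; the helper names are hypothetical, and `desc` stands in for an already prepared descriptor such as c_tx->mpa_crc_hd.

```c
#include <crypto/hash.h>

/*
 * Illustrative only: the two shash calling patterns the patch trades
 * between.  "desc" is assumed to be a descriptor whose tfm was set up
 * elsewhere; helper names are hypothetical.
 */
static int crc_incremental(struct shash_desc *desc, const u8 *buf,
			   unsigned int len, u8 *out)
{
	int rc;

	/* Old pattern: explicit init + update + final. */
	rc = crypto_shash_init(desc);
	if (rc)
		return rc;
	rc = crypto_shash_update(desc, buf, len);
	if (rc)
		return rc;
	return crypto_shash_final(desc, out);
}

static int crc_oneshot(struct shash_desc *desc, const u8 *buf,
		       unsigned int len, u8 *out)
{
	/* New pattern: one call digests the whole contiguous buffer. */
	return crypto_shash_digest(desc, buf, len, out);
}
```

Besides being shorter and checking a single return code, crypto_shash_digest() can dispatch to an algorithm-provided one-shot digest implementation where one exists, which is presumably where the expected performance improvement comes from.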
On Sat, 28 Oct 2023 21:58:39 -0700, Eric Biggers wrote:
> Simplify siw_qp_prepare_tx() by using crypto_shash_digest() instead of
> an init+update+final sequence. This should also improve performance.
>
>

Applied, thanks!

[1/1] RDMA/siw: use crypto_shash_digest() in siw_qp_prepare_tx()
      https://git.kernel.org/rdma/rdma/c/9aac6c05a56289

Best regards,
diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index 60b6a4135961..5b390f08f1cd 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -242,28 +242,24 @@ static int siw_qp_prepare_tx(struct siw_iwarp_tx *c_tx)
 				c_tx->pkt.c_untagged.ddp_mo = 0;
 			else
 				c_tx->pkt.c_tagged.ddp_to =
 					cpu_to_be64(wqe->sqe.raddr);
 		}
 
 		*(u32 *)crc = 0;
 		/*
 		 * Do complete CRC if enabled and short packet
 		 */
-		if (c_tx->mpa_crc_hd) {
-			crypto_shash_init(c_tx->mpa_crc_hd);
-			if (crypto_shash_update(c_tx->mpa_crc_hd,
-						(u8 *)&c_tx->pkt,
-						c_tx->ctrl_len))
-				return -EINVAL;
-			crypto_shash_final(c_tx->mpa_crc_hd, (u8 *)crc);
-		}
+		if (c_tx->mpa_crc_hd &&
+		    crypto_shash_digest(c_tx->mpa_crc_hd, (u8 *)&c_tx->pkt,
+					c_tx->ctrl_len, (u8 *)crc) != 0)
+			return -EINVAL;
 		c_tx->ctrl_len += MPA_CRC_SIZE;
 
 		return PKT_COMPLETE;
 	}
 	c_tx->ctrl_len += MPA_CRC_SIZE;
 	c_tx->sge_idx = 0;
 	c_tx->sge_off = 0;
 	c_tx->pbl_idx = 0;
 
 	/*
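As a standalone illustration of the call that now sits in the short-packet path: the sketch below allocates a CRC32C shash (MPA's framing CRC is CRC-32C) and digests a contiguous header in one shot. In the driver, the descriptor (c_tx->mpa_crc_hd) is prepared ahead of time and reused, so only the digest call appears per packet; the helper name, on-stack descriptor handling, and allocation here are illustrative, not taken from siw.

```c
#include <crypto/hash.h>
#include <linux/err.h>

/*
 * Hedged, self-contained sketch (not driver code): allocate a CRC32C
 * shash and digest a contiguous header in one call, as the patched
 * siw_qp_prepare_tx() now does for short packets.  crc_out must have
 * room for the 4-byte CRC32C digest.
 */
static int example_header_crc(const u8 *hdr, unsigned int len, u8 *crc_out)
{
	struct crypto_shash *tfm;
	int rc;

	tfm = crypto_alloc_shash("crc32c", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		/* One-shot digest over the header; CRC written to crc_out. */
		rc = crypto_shash_digest(desc, hdr, len, crc_out);
	}

	crypto_free_shash(tfm);
	return rc;
}
```

The 4-byte digest matches the MPA_CRC_SIZE the function adds to ctrl_len after computing the CRC.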