From patchwork Sun Apr 24 10:41:37 2022
X-Patchwork-Submitter: zhenwei pi
X-Patchwork-Id: 565799
From: zhenwei pi
To: arei.gonglei@huawei.com, mst@redhat.com, jasowang@redhat.com
Cc: herbert@gondor.apana.org.au, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, linux-crypto@vger.kernel.org,
    helei.sig11@bytedance.com, davem@davemloft.net, zhenwei pi
Subject: [PATCH v4 2/5] virtio-crypto: use private buffer for control request
Date: Sun, 24 Apr 2022 18:41:37 +0800
Message-Id: <20220424104140.44841-3-pizhenwei@bytedance.com>
In-Reply-To: <20220424104140.44841-1-pizhenwei@bytedance.com>
References: <20220424104140.44841-1-pizhenwei@bytedance.com>
X-Mailing-List: linux-crypto@vger.kernel.org
Originally, all control requests shared a single buffer (the ctrl, input and
ctrl_status fields in struct virtio_crypto). That design limits the control
queue to a depth of 1 and therefore caps its performance. With this patch,
each request allocates its own buffer dynamically and frees it once the
request completes, which also narrows the scope protected by ctrl_lock.
Increasing the control queue depth becomes possible as a follow-up step.

A necessary comment already exists in the code; it is repeated here:

/*
 * Note: there are padding fields in request, clear them to zero before
 * sending to host to avoid to divulge any information.
 * Ex, virtio_crypto_ctrl_request::ctrl::u::destroy_session::padding[48]
 */

So kzalloc is used to allocate the struct virtio_crypto_ctrl_request buffer.

Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Gonglei
Signed-off-by: zhenwei pi
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 .../virtio/virtio_crypto_akcipher_algs.c      | 41 +++++++++++----
 drivers/crypto/virtio/virtio_crypto_common.h  | 17 +++++--
 .../virtio/virtio_crypto_skcipher_algs.c      | 50 ++++++++++++-------
 3 files changed, 75 insertions(+), 33 deletions(-)
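[Editor's note, not part of the patch: before the diff itself, a minimal
sketch of the per-request control-buffer flow the commit message describes.
The helper name send_ctrl_request() and its arguments are hypothetical;
struct virtio_crypto_ctrl_request, ctrl_lock and the virtqueue calls come
from the changes below. The point is that the buffer lives only for one
request and the spinlock is held only around the virtqueue operations.]

static int send_ctrl_request(struct virtio_crypto *vcrypto,
			     struct virtio_crypto_op_ctrl_req *req_template)
{
	struct virtio_crypto_ctrl_request *vc_ctrl_req;
	struct scatterlist outhdr_sg, inhdr_sg, *sgs[2];
	unsigned int num_out = 0, num_in = 0, inlen;
	int err;

	/* kzalloc() zeroes the whole buffer, including padding fields. */
	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
	if (!vc_ctrl_req)
		return -ENOMEM;

	vc_ctrl_req->ctrl = *req_template;
	vc_ctrl_req->input.status = cpu_to_le32(VIRTIO_CRYPTO_ERR);

	sg_init_one(&outhdr_sg, &vc_ctrl_req->ctrl, sizeof(vc_ctrl_req->ctrl));
	sgs[num_out++] = &outhdr_sg;
	sg_init_one(&inhdr_sg, &vc_ctrl_req->input, sizeof(vc_ctrl_req->input));
	sgs[num_out + num_in++] = &inhdr_sg;

	/* ctrl_lock now only covers the virtqueue operations. */
	spin_lock(&vcrypto->ctrl_lock);
	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out, num_in,
				vcrypto, GFP_ATOMIC);
	if (err < 0) {
		spin_unlock(&vcrypto->ctrl_lock);
		goto out;
	}
	virtqueue_kick(vcrypto->ctrl_vq);
	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &inlen) &&
	       !virtqueue_is_broken(vcrypto->ctrl_vq))
		cpu_relax();
	spin_unlock(&vcrypto->ctrl_lock);

	err = le32_to_cpu(vc_ctrl_req->input.status) == VIRTIO_CRYPTO_OK ?
		0 : -EINVAL;
out:
	kfree(vc_ctrl_req);	/* private buffer, freed per request */
	return err;
}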
diff --git a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
index 20901a263fc8..509884e8b201 100644
--- a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
@@ -108,16 +108,22 @@ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher
 	unsigned int num_out = 0, num_in = 0;
 	struct virtio_crypto_op_ctrl_req *ctrl;
 	struct virtio_crypto_session_input *input;
+	struct virtio_crypto_ctrl_request *vc_ctrl_req;
 
 	pkey = kmemdup(key, keylen, GFP_ATOMIC);
 	if (!pkey)
 		return -ENOMEM;
 
-	spin_lock(&vcrypto->ctrl_lock);
-	ctrl = &vcrypto->ctrl;
+	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
+	if (!vc_ctrl_req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	ctrl = &vc_ctrl_req->ctrl;
 	memcpy(&ctrl->header, header, sizeof(ctrl->header));
 	memcpy(&ctrl->u, para, sizeof(ctrl->u));
-	input = &vcrypto->input;
+	input = &vc_ctrl_req->input;
 	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
 
 	sg_init_one(&outhdr_sg, ctrl, sizeof(*ctrl));
@@ -129,14 +135,18 @@ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher
 	sg_init_one(&inhdr_sg, input, sizeof(*input));
 	sgs[num_out + num_in++] = &inhdr_sg;
 
+	spin_lock(&vcrypto->ctrl_lock);
 	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out, num_in, vcrypto, GFP_ATOMIC);
-	if (err < 0)
+	if (err < 0) {
+		spin_unlock(&vcrypto->ctrl_lock);
 		goto out;
+	}
 
 	virtqueue_kick(vcrypto->ctrl_vq);
 	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &inlen) &&
 	       !virtqueue_is_broken(vcrypto->ctrl_vq))
 		cpu_relax();
+	spin_unlock(&vcrypto->ctrl_lock);
 
 	if (le32_to_cpu(input->status) != VIRTIO_CRYPTO_OK) {
 		err = -EINVAL;
@@ -148,7 +158,7 @@ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher
 	err = 0;
 out:
-	spin_unlock(&vcrypto->ctrl_lock);
+	kfree(vc_ctrl_req);
 	kfree_sensitive(pkey);
 
 	if (err < 0)
@@ -167,15 +177,22 @@ static int virtio_crypto_alg_akcipher_close_session(struct virtio_crypto_akciphe
 	int err;
 	struct virtio_crypto_op_ctrl_req *ctrl;
 	struct virtio_crypto_inhdr *ctrl_status;
+	struct virtio_crypto_ctrl_request *vc_ctrl_req;
 
-	spin_lock(&vcrypto->ctrl_lock);
 	if (!ctx->session_valid) {
 		err = 0;
 		goto out;
 	}
-	ctrl_status = &vcrypto->ctrl_status;
+
+	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
+	if (!vc_ctrl_req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	ctrl_status = &vc_ctrl_req->ctrl_status;
 	ctrl_status->status = VIRTIO_CRYPTO_ERR;
-	ctrl = &vcrypto->ctrl;
+	ctrl = &vc_ctrl_req->ctrl;
 	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION);
 	ctrl->header.queue_id = 0;
@@ -188,14 +205,18 @@ static int virtio_crypto_alg_akcipher_close_session(struct virtio_crypto_akciphe
 	sg_init_one(&inhdr_sg, &ctrl_status->status, sizeof(ctrl_status->status));
 	sgs[num_out + num_in++] = &inhdr_sg;
 
+	spin_lock(&vcrypto->ctrl_lock);
 	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out, num_in, vcrypto, GFP_ATOMIC);
-	if (err < 0)
+	if (err < 0) {
+		spin_unlock(&vcrypto->ctrl_lock);
 		goto out;
+	}
 
 	virtqueue_kick(vcrypto->ctrl_vq);
 	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &inlen) &&
 	       !virtqueue_is_broken(vcrypto->ctrl_vq))
 		cpu_relax();
+	spin_unlock(&vcrypto->ctrl_lock);
 
 	if (ctrl_status->status != VIRTIO_CRYPTO_OK) {
 		err = -EINVAL;
@@ -206,7 +227,7 @@ static int virtio_crypto_alg_akcipher_close_session(struct virtio_crypto_akciphe
 	ctx->session_valid = false;
 out:
-	spin_unlock(&vcrypto->ctrl_lock);
+	kfree(vc_ctrl_req);
 
 	if (err < 0) {
 		pr_err("virtio_crypto: Close session failed status: %u, session_id: 0x%llx\n",
 		       ctrl_status->status, destroy_session->session_id);
diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
index e693d4ee83a6..2422237ec4e6 100644
--- a/drivers/crypto/virtio/virtio_crypto_common.h
+++ b/drivers/crypto/virtio/virtio_crypto_common.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 /* Internal representation of a data virtqueue */
@@ -65,11 +66,6 @@ struct virtio_crypto {
 	/* Maximum size of per request */
 	u64 max_size;
 
-	/* Control VQ buffers: protected by the ctrl_lock */
-	struct virtio_crypto_op_ctrl_req ctrl;
-	struct virtio_crypto_session_input input;
-	struct virtio_crypto_inhdr ctrl_status;
-
 	unsigned long status;
 	atomic_t ref_count;
 	struct list_head list;
@@ -85,6 +81,17 @@ struct virtio_crypto_sym_session_info {
 	__u64 session_id;
 };
 
+/*
+ * Note: there are padding fields in request, clear them to zero before
+ * sending to host to avoid to divulge any information.
+ * Ex, virtio_crypto_ctrl_request::ctrl::u::destroy_session::padding[48]
+ */
+struct virtio_crypto_ctrl_request {
+	struct virtio_crypto_op_ctrl_req ctrl;
+	struct virtio_crypto_session_input input;
+	struct virtio_crypto_inhdr ctrl_status;
+};
+
 struct virtio_crypto_request;
 typedef void (*virtio_crypto_data_callback)
 		(struct virtio_crypto_request *vc_req, int len);
diff --git a/drivers/crypto/virtio/virtio_crypto_skcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_skcipher_algs.c
index e3c5bc8d6112..6aaf0869b211 100644
--- a/drivers/crypto/virtio/virtio_crypto_skcipher_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_skcipher_algs.c
@@ -126,6 +126,7 @@ static int virtio_crypto_alg_skcipher_init_session(
 	struct virtio_crypto_op_ctrl_req *ctrl;
 	struct virtio_crypto_session_input *input;
 	struct virtio_crypto_sym_create_session_req *sym_create_session;
+	struct virtio_crypto_ctrl_request *vc_ctrl_req;
 
 	/*
 	 * Avoid to do DMA from the stack, switch to using
@@ -136,15 +137,20 @@ static int virtio_crypto_alg_skcipher_init_session(
 	if (!cipher_key)
 		return -ENOMEM;
 
-	spin_lock(&vcrypto->ctrl_lock);
+	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
+	if (!vc_ctrl_req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
 	/* Pad ctrl header */
-	ctrl = &vcrypto->ctrl;
+	ctrl = &vc_ctrl_req->ctrl;
 	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_CIPHER_CREATE_SESSION);
 	ctrl->header.algo = cpu_to_le32(alg);
 	/* Set the default dataqueue id to 0 */
 	ctrl->header.queue_id = 0;
 
-	input = &vcrypto->input;
+	input = &vc_ctrl_req->input;
 	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
 	/* Pad cipher's parameters */
 	sym_create_session = &ctrl->u.sym_create_session;
@@ -164,12 +170,12 @@ static int virtio_crypto_alg_skcipher_init_session(
 	sg_init_one(&inhdr, input, sizeof(*input));
 	sgs[num_out + num_in++] = &inhdr;
 
+	spin_lock(&vcrypto->ctrl_lock);
 	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out, num_in, vcrypto, GFP_ATOMIC);
 	if (err < 0) {
 		spin_unlock(&vcrypto->ctrl_lock);
-		kfree_sensitive(cipher_key);
-		return err;
+		goto out;
 	}
 
 	virtqueue_kick(vcrypto->ctrl_vq);
@@ -180,13 +186,13 @@ static int virtio_crypto_alg_skcipher_init_session(
 	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &tmp) &&
 	       !virtqueue_is_broken(vcrypto->ctrl_vq))
 		cpu_relax();
+	spin_unlock(&vcrypto->ctrl_lock);
 
 	if (le32_to_cpu(input->status) != VIRTIO_CRYPTO_OK) {
-		spin_unlock(&vcrypto->ctrl_lock);
 		pr_err("virtio_crypto: Create session failed status: %u\n",
 		       le32_to_cpu(input->status));
-		kfree_sensitive(cipher_key);
-		return -EINVAL;
+		err = -EINVAL;
+		goto out;
 	}
 
 	if (encrypt)
@@ -194,10 +200,11 @@ static int virtio_crypto_alg_skcipher_init_session(
 	else
 		ctx->dec_sess_info.session_id = le64_to_cpu(input->session_id);
 
-	spin_unlock(&vcrypto->ctrl_lock);
-
+	err = 0;
+out:
+	kfree(vc_ctrl_req);
 	kfree_sensitive(cipher_key);
-	return 0;
+	return err;
 }
 
 static int virtio_crypto_alg_skcipher_close_session(
@@ -212,12 +219,16 @@ static int virtio_crypto_alg_skcipher_close_session(
 	unsigned int num_out = 0, num_in = 0;
 	struct virtio_crypto_op_ctrl_req *ctrl;
 	struct virtio_crypto_inhdr *ctrl_status;
+	struct virtio_crypto_ctrl_request *vc_ctrl_req;
 
-	spin_lock(&vcrypto->ctrl_lock);
-	ctrl_status = &vcrypto->ctrl_status;
+	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
+	if (!vc_ctrl_req)
+		return -ENOMEM;
+
+	ctrl_status = &vc_ctrl_req->ctrl_status;
 	ctrl_status->status = VIRTIO_CRYPTO_ERR;
 	/* Pad ctrl header */
-	ctrl = &vcrypto->ctrl;
+	ctrl = &vc_ctrl_req->ctrl;
 	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION);
 	/* Set the default virtqueue id to 0 */
 	ctrl->header.queue_id = 0;
@@ -236,28 +247,31 @@ static int virtio_crypto_alg_skcipher_close_session(
 	sg_init_one(&status_sg, &ctrl_status->status, sizeof(ctrl_status->status));
 	sgs[num_out + num_in++] = &status_sg;
 
+	spin_lock(&vcrypto->ctrl_lock);
 	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out, num_in, vcrypto, GFP_ATOMIC);
 	if (err < 0) {
 		spin_unlock(&vcrypto->ctrl_lock);
-		return err;
+		goto out;
 	}
 
 	virtqueue_kick(vcrypto->ctrl_vq);
 	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &tmp) &&
 	       !virtqueue_is_broken(vcrypto->ctrl_vq))
 		cpu_relax();
+	spin_unlock(&vcrypto->ctrl_lock);
 
 	if (ctrl_status->status != VIRTIO_CRYPTO_OK) {
-		spin_unlock(&vcrypto->ctrl_lock);
 		pr_err("virtio_crypto: Close session failed status: %u, session_id: 0x%llx\n",
 		       ctrl_status->status, destroy_session->session_id);
 		return -EINVAL;
 	}
 
-	spin_unlock(&vcrypto->ctrl_lock);
-	return 0;
+	err = 0;
+out:
+	kfree(vc_ctrl_req);
+	return err;
 }
 
 static int virtio_crypto_alg_skcipher_init_sessions(
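[Editor's note, not part of the patch: an aside on the padding comment added
to virtio_crypto_common.h above. A minimal sketch of why kzalloc() rather
than kmalloc() is the right allocator here; the helper name
alloc_ctrl_request() is made up for illustration.]

static struct virtio_crypto_ctrl_request *alloc_ctrl_request(void)
{
	/*
	 * struct virtio_crypto_op_ctrl_req contains padding (for example
	 * destroy_session::padding[48]); kzalloc() guarantees those bytes
	 * are zero, so no stale kernel memory is ever handed to the host.
	 * A plain kmalloc() would need an explicit
	 * memset(req, 0, sizeof(*req)) before any field is written.
	 * GFP_KERNEL matches the process context of session setup/teardown.
	 */
	return kzalloc(sizeof(struct virtio_crypto_ctrl_request), GFP_KERNEL);
}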
From patchwork Sun Apr 24 10:41:39 2022
X-Patchwork-Submitter: zhenwei pi
X-Patchwork-Id: 565798
From: zhenwei pi
To: arei.gonglei@huawei.com, mst@redhat.com, jasowang@redhat.com
Cc: herbert@gondor.apana.org.au, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, linux-crypto@vger.kernel.org,
    helei.sig11@bytedance.com, davem@davemloft.net, zhenwei pi
Subject: [PATCH v4 4/5] virtio-crypto: adjust dst_len at ops callback
Date: Sun, 24 Apr 2022 18:41:39 +0800
Message-Id: <20220424104140.44841-5-pizhenwei@bytedance.com>
In-Reply-To: <20220424104140.44841-1-pizhenwei@bytedance.com>
References: <20220424104140.44841-1-pizhenwei@bytedance.com>
X-Mailing-List: linux-crypto@vger.kernel.org
From: lei he

For some akcipher operations (e.g. decryption of pkcs1pad(rsa)), the length
of the returned result may be less than akcipher_req->dst_len, so the actual
dst_len needs to be recalculated from the virtqueue protocol.

Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Gonglei
Signed-off-by: lei he
Signed-off-by: zhenwei pi
---
 drivers/crypto/virtio/virtio_crypto_akcipher_algs.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
index 1e98502830cf..1892901d2a71 100644
--- a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
@@ -90,9 +90,12 @@ static void virtio_crypto_dataq_akcipher_callback(struct virtio_crypto_request *
 	}
 
 	akcipher_req = vc_akcipher_req->akcipher_req;
-	if (vc_akcipher_req->opcode != VIRTIO_CRYPTO_AKCIPHER_VERIFY)
+	if (vc_akcipher_req->opcode != VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+		/* actual length may be less than dst buffer */
+		akcipher_req->dst_len = len - sizeof(vc_req->status);
 		sg_copy_from_buffer(akcipher_req->dst, sg_nents(akcipher_req->dst),
 				    vc_akcipher_req->dst_buf, akcipher_req->dst_len);
+	}
 
 	virtio_crypto_akcipher_finalize_req(vc_akcipher_req, akcipher_req, error);
 }
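[Editor's note, not part of the patch: in the hunk above, "len" is the total
number of bytes the device wrote into the device-writable buffers, i.e. the
result data followed by the per-request status field, which is why the real
result length is len minus the status size. A minimal sketch of the
arithmetic; the helper name is made up for illustration.]

static unsigned int virtio_akcipher_actual_dst_len(unsigned int len,
						   unsigned int status_size)
{
	/*
	 * Example: a pkcs1pad(rsa) decryption with a 256-byte dst buffer
	 * that yields a 200-byte plaintext reports len = 200 + status_size,
	 * so dst_len is trimmed from 256 down to 200.
	 */
	return len - status_size;
}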
From patchwork Sun Apr 24 10:41:40 2022
X-Patchwork-Submitter: zhenwei pi
X-Patchwork-Id: 565797
From: zhenwei pi
To: arei.gonglei@huawei.com, mst@redhat.com, jasowang@redhat.com
Cc: herbert@gondor.apana.org.au, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, linux-crypto@vger.kernel.org,
    helei.sig11@bytedance.com, davem@davemloft.net, zhenwei pi
Subject: [PATCH v4 5/5] virtio-crypto: enable retry for virtio-crypto-dev
Date: Sun, 24 Apr 2022 18:41:40 +0800
Message-Id: <20220424104140.44841-6-pizhenwei@bytedance.com>
In-Reply-To: <20220424104140.44841-1-pizhenwei@bytedance.com>
References: <20220424104140.44841-1-pizhenwei@bytedance.com>
X-Mailing-List: linux-crypto@vger.kernel.org
From: lei he

Enable retry for virtio-crypto-dev, so that the crypto engine can process
cipher requests in parallel.

Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Gonglei
Signed-off-by: lei he
Signed-off-by: zhenwei pi
---
 drivers/crypto/virtio/virtio_crypto_core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index 60490ffa3df1..f67e0d4c1b0c 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -144,7 +144,8 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
 		spin_lock_init(&vi->data_vq[i].lock);
 		vi->data_vq[i].vq = vqs[i];
 		/* Initialize crypto engine */
-		vi->data_vq[i].engine = crypto_engine_alloc_init(dev, 1);
+		vi->data_vq[i].engine = crypto_engine_alloc_init_and_set(dev, true, NULL, 1,
+						virtqueue_get_vring_size(vqs[i]));
 		if (!vi->data_vq[i].engine) {
 			ret = -ENOMEM;
 			goto err_engine;
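[Editor's note, not part of the patch: crypto_engine_alloc_init_and_set()
takes (dev, retry_support, cbk_do_batch, rt, qlen). The sketch below spells
out the arguments used in the hunk above; the wrapper name
virtio_crypto_alloc_engine() is made up for illustration.]

static struct crypto_engine *virtio_crypto_alloc_engine(struct device *dev,
							struct virtqueue *vq)
{
	/*
	 * retry_support = true: the engine can re-queue a request when the
	 * backend (here, the data virtqueue) is temporarily full instead of
	 * failing it, which is what lets requests be processed in parallel.
	 * cbk_do_batch = NULL: no batching callback is used.
	 * rt = true: run the engine worker at realtime priority, matching
	 * the old crypto_engine_alloc_init(dev, 1).
	 * qlen: size the engine's software queue to the vring capacity.
	 */
	return crypto_engine_alloc_init_and_set(dev, true, NULL, true,
						virtqueue_get_vring_size(vq));
}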