From patchwork Tue Jan 3 18:42:53 2023
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 638839
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu, Jakub Kicinski, "David S.
Miller" Cc: Dmitry Safonov , Andy Lutomirski , Bob Gilligan , Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v2 1/5] crypto: Introduce crypto_pool Date: Tue, 3 Jan 2023 18:42:53 +0000 Message-Id: <20230103184257.118069-2-dima@arista.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230103184257.118069-1-dima@arista.com> References: <20230103184257.118069-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Introduce a per-CPU pool of async crypto requests that can be used in bh-disabled contexts (designed with net RX/TX softirqs as users in mind). Allocation can sleep and is a slow-path. Initial implementation has only ahash as a backend and a fix-sized array of possible algorithms used in parallel. Signed-off-by: Dmitry Safonov --- crypto/Kconfig | 6 + crypto/Makefile | 1 + crypto/crypto_pool.c | 291 ++++++++++++++++++++++++++++++++++++++++++ include/crypto/pool.h | 34 +++++ 4 files changed, 332 insertions(+) create mode 100644 crypto/crypto_pool.c create mode 100644 include/crypto/pool.h diff --git a/crypto/Kconfig b/crypto/Kconfig index 9c86f7045157..ba8d4a1f10f9 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1388,6 +1388,12 @@ endmenu config CRYPTO_HASH_INFO bool +config CRYPTO_POOL + tristate "Per-CPU crypto pool" + default n + help + Per-CPU pool of crypto requests ready for usage in atomic contexts. + if !KMSAN # avoid false positives from assembly if ARM source "arch/arm/crypto/Kconfig" diff --git a/crypto/Makefile b/crypto/Makefile index d0126c915834..eed8f61bc93b 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -63,6 +63,7 @@ obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o cryptomgr-y := algboss.o testmgr.o obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o +obj-$(CONFIG_CRYPTO_POOL) += crypto_pool.o obj-$(CONFIG_CRYPTO_USER) += crypto_user.o crypto_user-y := crypto_user_base.o crypto_user-$(CONFIG_CRYPTO_STATS) += crypto_user_stat.o diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c new file mode 100644 index 000000000000..37131952c5a7 --- /dev/null +++ b/crypto/crypto_pool.c @@ -0,0 +1,291 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include +#include +#include +#include +#include +#include + +static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ; +static DEFINE_PER_CPU(void *, crypto_pool_scratch); + +struct crypto_pool_entry { + struct ahash_request * __percpu *req; + const char *alg; + struct kref kref; + bool needs_key; +}; + +#define CPOOL_SIZE (PAGE_SIZE/sizeof(struct crypto_pool_entry)) +static struct crypto_pool_entry cpool[CPOOL_SIZE]; +static unsigned int cpool_populated; +static DEFINE_MUTEX(cpool_mutex); + +static int crypto_pool_scratch_alloc(void) +{ + int cpu; + + lockdep_assert_held(&cpool_mutex); + + for_each_possible_cpu(cpu) { + void *scratch = per_cpu(crypto_pool_scratch, cpu); + + if (scratch) + continue; + + scratch = kmalloc_node(scratch_size, GFP_KERNEL, + cpu_to_node(cpu)); + if (!scratch) + return -ENOMEM; + per_cpu(crypto_pool_scratch, cpu) = scratch; + } + return 0; +} + +static void crypto_pool_scratch_free(void) +{ + int cpu; + + lockdep_assert_held(&cpool_mutex); + + for_each_possible_cpu(cpu) { + void *scratch = per_cpu(crypto_pool_scratch, cpu); + + if (!scratch) + continue; + per_cpu(crypto_pool_scratch, cpu) = NULL; + kfree(scratch); + } +} + +static int __cpool_alloc_ahash(struct crypto_pool_entry *e, const char *alg) +{ + 
struct crypto_ahash *hash; + int cpu, ret = -ENOMEM; + + e->alg = kstrdup(alg, GFP_KERNEL); + if (!e->alg) + return -ENOMEM; + + e->req = alloc_percpu(struct ahash_request *); + if (!e->req) + goto out_free_alg; + + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) { + ret = PTR_ERR(hash); + goto out_free_req; + } + + /* If hash has .setkey(), allocate ahash per-cpu, not only request */ + e->needs_key = crypto_ahash_get_flags(hash) & CRYPTO_TFM_NEED_KEY; + + for_each_possible_cpu(cpu) { + struct ahash_request *req; + + if (!hash) + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) + goto out_free; + + req = ahash_request_alloc(hash, GFP_KERNEL); + if (!req) + goto out_free; + + ahash_request_set_callback(req, 0, NULL, NULL); + + *per_cpu_ptr(e->req, cpu) = req; + + if (e->needs_key) + hash = NULL; + } + kref_init(&e->kref); + return 0; + +out_free: + if (!IS_ERR_OR_NULL(hash) && e->needs_key) + crypto_free_ahash(hash); + + for_each_possible_cpu(cpu) { + if (*per_cpu_ptr(e->req, cpu) == NULL) + break; + hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu)); + ahash_request_free(*per_cpu_ptr(e->req, cpu)); + if (e->needs_key) { + crypto_free_ahash(hash); + hash = NULL; + } + } + + if (hash) + crypto_free_ahash(hash); +out_free_req: + free_percpu(e->req); +out_free_alg: + kfree(e->alg); + e->alg = NULL; + return ret; +} + +/** + * crypto_pool_alloc_ahash - allocates pool for ahash requests + * @alg: name of async hash algorithm + */ +int crypto_pool_alloc_ahash(const char *alg) +{ + int i, ret; + + /* slow-path */ + mutex_lock(&cpool_mutex); + + for (i = 0; i < cpool_populated; i++) { + if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) { + if (kref_read(&cpool[i].kref) > 0) { + kref_get(&cpool[i].kref); + ret = i; + goto out; + } else { + break; + } + } + } + + for (i = 0; i < cpool_populated; i++) { + if (!cpool[i].alg) + break; + } + if (i >= CPOOL_SIZE) { + ret = -ENOSPC; + goto out; + } + + ret = __cpool_alloc_ahash(&cpool[i], alg); + if (!ret) { + ret = i; + if (i == cpool_populated) + cpool_populated++; + } +out: + mutex_unlock(&cpool_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(crypto_pool_alloc_ahash); + +static void __cpool_free_entry(struct crypto_pool_entry *e) +{ + struct crypto_ahash *hash = NULL; + int cpu; + + for_each_possible_cpu(cpu) { + if (*per_cpu_ptr(e->req, cpu) == NULL) + continue; + + hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu)); + ahash_request_free(*per_cpu_ptr(e->req, cpu)); + if (e->needs_key) { + crypto_free_ahash(hash); + hash = NULL; + } + } + if (hash) + crypto_free_ahash(hash); + free_percpu(e->req); + kfree(e->alg); + memset(e, 0, sizeof(*e)); +} + +static void cpool_cleanup_work_cb(struct work_struct *work) +{ + unsigned int i; + bool free_scratch = true; + + mutex_lock(&cpool_mutex); + for (i = 0; i < cpool_populated; i++) { + if (kref_read(&cpool[i].kref) > 0) { + free_scratch = false; + continue; + } + if (!cpool[i].alg) + continue; + __cpool_free_entry(&cpool[i]); + } + if (free_scratch) + crypto_pool_scratch_free(); + mutex_unlock(&cpool_mutex); +} + +static DECLARE_WORK(cpool_cleanup_work, cpool_cleanup_work_cb); +static void cpool_schedule_cleanup(struct kref *kref) +{ + schedule_work(&cpool_cleanup_work); +} + +/** + * crypto_pool_release - decreases number of users for a pool. If it was + * the last user of the pool, releases any memory that was consumed. 
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + */ +void crypto_pool_release(unsigned int id) +{ + if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) + return; + + /* slow-path */ + kref_put(&cpool[id].kref, cpool_schedule_cleanup); +} +EXPORT_SYMBOL_GPL(crypto_pool_release); + +/** + * crypto_pool_add - increases number of users (refcounter) for a pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + */ +void crypto_pool_add(unsigned int id) +{ + if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) + return; + kref_get(&cpool[id].kref); +} +EXPORT_SYMBOL_GPL(crypto_pool_add); + +/** + * crypto_pool_get - disable bh and start using crypto_pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + * @c: returned crypto_pool for usage (uninitialized on failure) + */ +int crypto_pool_get(unsigned int id, struct crypto_pool *c) +{ + struct crypto_pool_ahash *ret = (struct crypto_pool_ahash *)c; + + local_bh_disable(); + if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) { + local_bh_enable(); + return -EINVAL; + } + ret->req = *this_cpu_ptr(cpool[id].req); + ret->base.scratch = this_cpu_read(crypto_pool_scratch); + return 0; +} +EXPORT_SYMBOL_GPL(crypto_pool_get); + +/** + * crypto_pool_algo - return algorithm of crypto_pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + * @buf: buffer to return name of algorithm + * @buf_len: size of @buf + */ +size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len) +{ + size_t ret = 0; + + /* slow-path */ + mutex_lock(&cpool_mutex); + if (cpool[id].alg) + ret = strscpy(buf, cpool[id].alg, buf_len); + mutex_unlock(&cpool_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(crypto_pool_algo); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Per-CPU pool of crypto requests"); diff --git a/include/crypto/pool.h b/include/crypto/pool.h new file mode 100644 index 000000000000..2c61aa45faff --- /dev/null +++ b/include/crypto/pool.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _CRYPTO_POOL_H +#define _CRYPTO_POOL_H + +#include + +#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128 + +struct crypto_pool { + void *scratch; +}; + +/* + * struct crypto_pool_ahash - per-CPU pool of ahash_requests + * @base: common members that can be used by any async crypto ops + * @req: pre-allocated ahash request + */ +struct crypto_pool_ahash { + struct crypto_pool base; + struct ahash_request *req; +}; + +int crypto_pool_alloc_ahash(const char *alg); +void crypto_pool_add(unsigned int id); +void crypto_pool_release(unsigned int id); + +int crypto_pool_get(unsigned int id, struct crypto_pool *c); +static inline void crypto_pool_put(void) +{ + local_bh_enable(); +} +size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len); + +#endif /* _CRYPTO_POOL_H */ From patchwork Tue Jan 3 18:42:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 638838 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48A2EC53210 for ; Tue, 3 Jan 2023 18:46:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233929AbjACSpy (ORCPT ); Tue, 3 Jan 2023 13:45:54 -0500 Received: from lindbergh.monkeyblade.net 
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu, Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan, Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez, Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v2 2/5] crypto/pool: Add crypto_pool_reserve_scratch()
Date: Tue, 3 Jan 2023 18:42:54 +0000
Message-Id: <20230103184257.118069-3-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

Instead of a build-time hardcoded constant, reallocate the scratch area when a user needs a bigger one. Different algorithms and different users may need different sizes for the temporary per-CPU buffer. Only up-sizing is supported, for simplicity.
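For reviewers who want to see how the API from patches 1-2 is meant to be used together, here is a minimal usage sketch. It is not part of the patch: the example_* identifiers, the "hmac(sha1)" algorithm name and the SHA1_DIGEST_SIZE scratch size are illustrative assumptions; the fast-path pattern simply mirrors the seg6 conversion in patch 4.

/* Slow path (may sleep): set up a pool once, e.g. when the first key is installed. */
static int example_pool_id = -1;

static int example_setup(void)
{
	int id = crypto_pool_alloc_ahash("hmac(sha1)");	/* assumed algorithm */

	if (id < 0)
		return id;

	/* Make sure the per-CPU scratch buffer can hold the digest. */
	if (crypto_pool_reserve_scratch(SHA1_DIGEST_SIZE)) {
		crypto_pool_release(id);
		return -ENOMEM;
	}
	example_pool_id = id;
	return 0;
}

/* Fast path: may run with bh disabled (e.g. net RX/TX softirq), must not sleep. */
static int example_hmac(const u8 *key, unsigned int keylen,
			struct scatterlist *sg, unsigned int len, u8 *out)
{
	struct crypto_pool_ahash hp;
	int err;

	/* Disables bh and hands out this CPU's pre-allocated request + scratch. */
	err = crypto_pool_get(example_pool_id, (struct crypto_pool *)&hp);
	if (err)
		return err;

	err = crypto_ahash_setkey(crypto_ahash_reqtfm(hp.req), key, keylen);
	if (err)
		goto out;
	err = crypto_ahash_init(hp.req);
	if (err)
		goto out;

	/* Digest lands in the per-CPU scratch area, valid until crypto_pool_put(). */
	ahash_request_set_crypt(hp.req, sg, hp.base.scratch, len);
	err = crypto_ahash_update(hp.req);
	if (err)
		goto out;
	err = crypto_ahash_final(hp.req);
	if (!err)
		memcpy(out, hp.base.scratch, SHA1_DIGEST_SIZE);
out:
	crypto_pool_put();	/* re-enables bh */
	return err;
}

The id returned by crypto_pool_alloc_ahash() is reference counted: additional users of the same algorithm call crypto_pool_add(), and the per-CPU requests/tfms are only freed after the last user calls crypto_pool_release().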
Signed-off-by: Dmitry Safonov --- crypto/Kconfig | 6 ++++ crypto/crypto_pool.c | 77 ++++++++++++++++++++++++++++++++++--------- include/crypto/pool.h | 3 +- 3 files changed, 69 insertions(+), 17 deletions(-) diff --git a/crypto/Kconfig b/crypto/Kconfig index ba8d4a1f10f9..0614c2acfffa 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1394,6 +1394,12 @@ config CRYPTO_POOL help Per-CPU pool of crypto requests ready for usage in atomic contexts. +config CRYPTO_POOL_DEFAULT_SCRATCH_SIZE + hex "Per-CPU default scratch area size" + depends on CRYPTO_POOL + default 0x100 + range 0x100 0x10000 + if !KMSAN # avoid false positives from assembly if ARM source "arch/arm/crypto/Kconfig" diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c index 37131952c5a7..0cd9eade7b73 100644 --- a/crypto/crypto_pool.c +++ b/crypto/crypto_pool.c @@ -1,13 +1,14 @@ // SPDX-License-Identifier: GPL-2.0-or-later #include +#include #include #include #include #include #include -static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ; +static unsigned long scratch_size = CONFIG_CRYPTO_POOL_DEFAULT_SCRATCH_SIZE; static DEFINE_PER_CPU(void *, crypto_pool_scratch); struct crypto_pool_entry { @@ -22,26 +23,69 @@ static struct crypto_pool_entry cpool[CPOOL_SIZE]; static unsigned int cpool_populated; static DEFINE_MUTEX(cpool_mutex); -static int crypto_pool_scratch_alloc(void) +/* Slow-path */ +/** + * crypto_pool_reserve_scratch - re-allocates scratch buffer, slow-path + * @size: request size for the scratch/temp buffer + */ +int crypto_pool_reserve_scratch(unsigned long size) { - int cpu; - - lockdep_assert_held(&cpool_mutex); +#define FREE_BATCH_SIZE 64 + void *free_batch[FREE_BATCH_SIZE]; + int cpu, err = 0; + unsigned int i = 0; + mutex_lock(&cpool_mutex); + if (size == scratch_size) { + for_each_possible_cpu(cpu) { + if (per_cpu(crypto_pool_scratch, cpu)) + continue; + goto allocate_scratch; + } + mutex_unlock(&cpool_mutex); + return 0; + } +allocate_scratch: + size = max(size, scratch_size); + cpus_read_lock(); for_each_possible_cpu(cpu) { - void *scratch = per_cpu(crypto_pool_scratch, cpu); + void *scratch, *old_scratch; - if (scratch) + scratch = kmalloc_node(size, GFP_KERNEL, cpu_to_node(cpu)); + if (!scratch) { + err = -ENOMEM; + break; + } + + old_scratch = per_cpu(crypto_pool_scratch, cpu); + /* Pairs with crypto_pool_get() */ + WRITE_ONCE(*per_cpu_ptr(&crypto_pool_scratch, cpu), scratch); + if (!cpu_online(cpu)) { + kfree(old_scratch); continue; + } + free_batch[i++] = old_scratch; + if (i == FREE_BATCH_SIZE) { + cpus_read_unlock(); + synchronize_rcu(); + while (i > 0) + kfree(free_batch[--i]); + cpus_read_lock(); + } + } + cpus_read_unlock(); + if (!err) + scratch_size = size; + mutex_unlock(&cpool_mutex); - scratch = kmalloc_node(scratch_size, GFP_KERNEL, - cpu_to_node(cpu)); - if (!scratch) - return -ENOMEM; - per_cpu(crypto_pool_scratch, cpu) = scratch; + if (i > 0) { + synchronize_rcu(); + while (i > 0) + kfree(free_batch[--i]); } - return 0; + return err; } +EXPORT_SYMBOL_GPL(crypto_pool_reserve_scratch); static void crypto_pool_scratch_free(void) { @@ -138,7 +182,6 @@ int crypto_pool_alloc_ahash(const char *alg) /* slow-path */ mutex_lock(&cpool_mutex); - for (i = 0; i < cpool_populated; i++) { if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) { if (kref_read(&cpool[i].kref) > 0) { @@ -263,7 +306,11 @@ int crypto_pool_get(unsigned int id, struct crypto_pool *c) return -EINVAL; } ret->req = *this_cpu_ptr(cpool[id].req); - ret->base.scratch = this_cpu_read(crypto_pool_scratch); + /* + * Pairs 
with crypto_pool_reserve_scratch(), the scratch area is
+	 * valid (allocated) until crypto_pool_put().
+	 */
+	ret->base.scratch = READ_ONCE(*this_cpu_ptr(&crypto_pool_scratch));
 	return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_pool_get);
diff --git a/include/crypto/pool.h b/include/crypto/pool.h
index 2c61aa45faff..c7d817860cc3 100644
--- a/include/crypto/pool.h
+++ b/include/crypto/pool.h
@@ -4,8 +4,6 @@
 
 #include
 
-#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128
-
 struct crypto_pool {
 	void *scratch;
 };
@@ -20,6 +18,7 @@ struct crypto_pool_ahash {
 	struct ahash_request *req;
 };
 
+int crypto_pool_reserve_scratch(unsigned long size);
 int crypto_pool_alloc_ahash(const char *alg);
 void crypto_pool_add(unsigned int id);
 void crypto_pool_release(unsigned int id);
From patchwork Tue Jan 3 18:42:56 2023
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 638837
Mindolluin.ire.aristanetworks.com ([217.173.96.166]) by smtp.gmail.com with ESMTPSA id i18-20020a5d5232000000b0028e55b44a99sm13811578wra.17.2023.01.03.10.43.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Jan 2023 10:43:09 -0800 (PST) From: Dmitry Safonov To: linux-kernel@vger.kernel.org, David Ahern , Eric Dumazet , Herbert Xu , Jakub Kicinski , "David S. Miller" Cc: Dmitry Safonov , Andy Lutomirski , Bob Gilligan , Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v2 4/5] crypto/net/ipv6: sr: Switch to using crypto_pool Date: Tue, 3 Jan 2023 18:42:56 +0000 Message-Id: <20230103184257.118069-5-dima@arista.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230103184257.118069-1-dima@arista.com> References: <20230103184257.118069-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The conversion to use crypto_pool has the following upsides: - now SR uses asynchronous API which may potentially free CPU cycles and improve performance for of CPU crypto algorithm providers; - hash descriptors now don't have to be allocated on boot, but only at the moment SR starts using HMAC and until the last HMAC secret is deleted; - potentially reuse ahash_request(s) for different users - allocate only one per-CPU scratch buffer rather than a new one for each user - have a common API for net/ users that need ahash on RX/TX fast path Signed-off-by: Dmitry Safonov --- include/net/seg6_hmac.h | 7 -- net/ipv6/Kconfig | 2 +- net/ipv6/seg6.c | 3 - net/ipv6/seg6_hmac.c | 204 ++++++++++++++++------------------------ 4 files changed, 80 insertions(+), 136 deletions(-) diff --git a/include/net/seg6_hmac.h b/include/net/seg6_hmac.h index 2b5d2ee5613e..d6b7820ecda2 100644 --- a/include/net/seg6_hmac.h +++ b/include/net/seg6_hmac.h @@ -32,13 +32,6 @@ struct seg6_hmac_info { u8 alg_id; }; -struct seg6_hmac_algo { - u8 alg_id; - char name[64]; - struct crypto_shash * __percpu *tfms; - struct shash_desc * __percpu *shashs; -}; - extern int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, struct in6_addr *saddr, u8 *output); diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig index 658bfed1df8b..5be1dab0f178 100644 --- a/net/ipv6/Kconfig +++ b/net/ipv6/Kconfig @@ -304,7 +304,7 @@ config IPV6_SEG6_LWTUNNEL config IPV6_SEG6_HMAC bool "IPv6: Segment Routing HMAC support" depends on IPV6 - select CRYPTO + select CRYPTO_POOL select CRYPTO_HMAC select CRYPTO_SHA1 select CRYPTO_SHA256 diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c index 29346a6eec9f..3d66bf6d4c66 100644 --- a/net/ipv6/seg6.c +++ b/net/ipv6/seg6.c @@ -558,9 +558,6 @@ int __init seg6_init(void) void seg6_exit(void) { -#ifdef CONFIG_IPV6_SEG6_HMAC - seg6_hmac_exit(); -#endif #ifdef CONFIG_IPV6_SEG6_LWTUNNEL seg6_iptunnel_exit(); #endif diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c index d43c50a7310d..3732dd993925 100644 --- a/net/ipv6/seg6_hmac.c +++ b/net/ipv6/seg6_hmac.c @@ -35,6 +35,7 @@ #include #include +#include #include #include #include @@ -70,6 +71,12 @@ static const struct rhashtable_params rht_params = { .obj_cmpfn = seg6_hmac_cmpfn, }; +struct seg6_hmac_algo { + u8 alg_id; + char name[64]; + int crypto_pool_id; +}; + static struct seg6_hmac_algo hmac_algos[] = { { .alg_id = SEG6_HMAC_ALGO_SHA1, @@ -115,55 +122,17 @@ static struct seg6_hmac_algo *__hmac_get_algo(u8 alg_id) return NULL; } -static int __do_hmac(struct 
seg6_hmac_info *hinfo, const char *text, u8 psize, - u8 *output, int outlen) -{ - struct seg6_hmac_algo *algo; - struct crypto_shash *tfm; - struct shash_desc *shash; - int ret, dgsize; - - algo = __hmac_get_algo(hinfo->alg_id); - if (!algo) - return -ENOENT; - - tfm = *this_cpu_ptr(algo->tfms); - - dgsize = crypto_shash_digestsize(tfm); - if (dgsize > outlen) { - pr_debug("sr-ipv6: __do_hmac: digest size too big (%d / %d)\n", - dgsize, outlen); - return -ENOMEM; - } - - ret = crypto_shash_setkey(tfm, hinfo->secret, hinfo->slen); - if (ret < 0) { - pr_debug("sr-ipv6: crypto_shash_setkey failed: err %d\n", ret); - goto failed; - } - - shash = *this_cpu_ptr(algo->shashs); - shash->tfm = tfm; - - ret = crypto_shash_digest(shash, text, psize, output); - if (ret < 0) { - pr_debug("sr-ipv6: crypto_shash_digest failed: err %d\n", ret); - goto failed; - } - - return dgsize; - -failed: - return ret; -} - int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, struct in6_addr *saddr, u8 *output) { __be32 hmackeyid = cpu_to_be32(hinfo->hmackeyid); - u8 tmp_out[SEG6_HMAC_MAX_DIGESTSIZE]; + struct crypto_pool_ahash hp; + struct seg6_hmac_algo *algo; int plen, i, dgsize, wrsize; + struct crypto_ahash *tfm; + struct scatterlist sg; char *ring, *off; + int err; /* a 160-byte buffer for digest output allows to store highest known * hash function (RadioGatun) with up to 1216 bits @@ -176,6 +145,10 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, if (plen >= SEG6_HMAC_RING_SIZE) return -EMSGSIZE; + algo = __hmac_get_algo(hinfo->alg_id); + if (!algo) + return -ENOENT; + /* Let's build the HMAC text on the ring buffer. The text is composed * as follows, in order: * @@ -186,8 +159,36 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, * 5. 
All segments in the segments list (n * 128 bits) */ - local_bh_disable(); + err = crypto_pool_get(algo->crypto_pool_id, (struct crypto_pool *)&hp); + if (err) + return err; + ring = this_cpu_ptr(hmac_ring); + + sg_init_one(&sg, ring, plen); + + tfm = crypto_ahash_reqtfm(hp.req); + dgsize = crypto_ahash_digestsize(tfm); + if (dgsize > SEG6_HMAC_MAX_DIGESTSIZE) { + pr_debug("digest size too big (%d / %d)\n", + dgsize, SEG6_HMAC_MAX_DIGESTSIZE); + err = -ENOMEM; + goto err_put_pool; + } + + err = crypto_ahash_setkey(tfm, hinfo->secret, hinfo->slen); + if (err) { + pr_debug("crypto_ahash_setkey failed: err %d\n", err); + goto err_put_pool; + } + + err = crypto_ahash_init(hp.req); + if (err) + goto err_put_pool; + + ahash_request_set_crypt(hp.req, &sg, + hp.base.scratch, SEG6_HMAC_MAX_DIGESTSIZE); + off = ring; /* source address */ @@ -210,21 +211,25 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, off += 16; } - dgsize = __do_hmac(hinfo, ring, plen, tmp_out, - SEG6_HMAC_MAX_DIGESTSIZE); - local_bh_enable(); + err = crypto_ahash_update(hp.req); + if (err) + goto err_put_pool; - if (dgsize < 0) - return dgsize; + err = crypto_ahash_final(hp.req); + if (err) + goto err_put_pool; wrsize = SEG6_HMAC_FIELD_LEN; if (wrsize > dgsize) wrsize = dgsize; memset(output, 0, SEG6_HMAC_FIELD_LEN); - memcpy(output, tmp_out, wrsize); + memcpy(output, hp.base.scratch, wrsize); - return 0; +err_put_pool: + crypto_pool_put(); + + return err; } EXPORT_SYMBOL(seg6_hmac_compute); @@ -291,12 +296,24 @@ EXPORT_SYMBOL(seg6_hmac_info_lookup); int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo) { struct seg6_pernet_data *sdata = seg6_pernet(net); - int err; + struct seg6_hmac_algo *algo; + int ret; - err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node, + algo = __hmac_get_algo(hinfo->alg_id); + if (!algo) + return -ENOENT; + + ret = crypto_pool_alloc_ahash(algo->name); + if (ret < 0) + return ret; + algo->crypto_pool_id = ret; + + ret = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node, rht_params); + if (ret) + crypto_pool_release(algo->crypto_pool_id); - return err; + return ret; } EXPORT_SYMBOL(seg6_hmac_info_add); @@ -304,6 +321,7 @@ int seg6_hmac_info_del(struct net *net, u32 key) { struct seg6_pernet_data *sdata = seg6_pernet(net); struct seg6_hmac_info *hinfo; + struct seg6_hmac_algo *algo; int err = -ENOENT; hinfo = rhashtable_lookup_fast(&sdata->hmac_infos, &key, rht_params); @@ -315,6 +333,12 @@ int seg6_hmac_info_del(struct net *net, u32 key) if (err) goto out; + algo = __hmac_get_algo(hinfo->alg_id); + if (algo) + crypto_pool_release(algo->crypto_pool_id); + else + WARN_ON_ONCE(1); + seg6_hinfo_release(hinfo); out: @@ -348,56 +372,9 @@ int seg6_push_hmac(struct net *net, struct in6_addr *saddr, } EXPORT_SYMBOL(seg6_push_hmac); -static int seg6_hmac_init_algo(void) -{ - struct seg6_hmac_algo *algo; - struct crypto_shash *tfm; - struct shash_desc *shash; - int i, alg_count, cpu; - - alg_count = ARRAY_SIZE(hmac_algos); - - for (i = 0; i < alg_count; i++) { - struct crypto_shash **p_tfm; - int shsize; - - algo = &hmac_algos[i]; - algo->tfms = alloc_percpu(struct crypto_shash *); - if (!algo->tfms) - return -ENOMEM; - - for_each_possible_cpu(cpu) { - tfm = crypto_alloc_shash(algo->name, 0, 0); - if (IS_ERR(tfm)) - return PTR_ERR(tfm); - p_tfm = per_cpu_ptr(algo->tfms, cpu); - *p_tfm = tfm; - } - - p_tfm = raw_cpu_ptr(algo->tfms); - tfm = *p_tfm; - - shsize = sizeof(*shash) + crypto_shash_descsize(tfm); - - algo->shashs = 
alloc_percpu(struct shash_desc *); - if (!algo->shashs) - return -ENOMEM; - - for_each_possible_cpu(cpu) { - shash = kzalloc_node(shsize, GFP_KERNEL, - cpu_to_node(cpu)); - if (!shash) - return -ENOMEM; - *per_cpu_ptr(algo->shashs, cpu) = shash; - } - } - - return 0; -} - int __init seg6_hmac_init(void) { - return seg6_hmac_init_algo(); + return crypto_pool_reserve_scratch(SEG6_HMAC_MAX_DIGESTSIZE); } int __net_init seg6_hmac_net_init(struct net *net) @@ -407,29 +384,6 @@ int __net_init seg6_hmac_net_init(struct net *net) return rhashtable_init(&sdata->hmac_infos, &rht_params); } -void seg6_hmac_exit(void) -{ - struct seg6_hmac_algo *algo = NULL; - int i, alg_count, cpu; - - alg_count = ARRAY_SIZE(hmac_algos); - for (i = 0; i < alg_count; i++) { - algo = &hmac_algos[i]; - for_each_possible_cpu(cpu) { - struct crypto_shash *tfm; - struct shash_desc *shash; - - shash = *per_cpu_ptr(algo->shashs, cpu); - kfree(shash); - tfm = *per_cpu_ptr(algo->tfms, cpu); - crypto_free_shash(tfm); - } - free_percpu(algo->tfms); - free_percpu(algo->shashs); - } -} -EXPORT_SYMBOL(seg6_hmac_exit); - void __net_exit seg6_hmac_net_exit(struct net *net) { struct seg6_pernet_data *sdata = seg6_pernet(net);