From patchwork Mon Mar 3 08:47:11 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869915
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
 usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
 ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
 linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
 davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
 ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 01/14] crypto: acomp - Add synchronous/asynchronous acomp request chaining.
Date: Mon, 3 Mar 2025 00:47:11 -0800
Message-Id: <20250303084724.6490-2-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch is based on Herbert Xu's request chaining for ahash
("[PATCH 2/6] crypto: hash - Add request chaining API") [1]. The generic
request chaining framework provided in the ahash implementation has been
used as a reference to develop a similar synchronous request chaining
framework for crypto_acomp.

Furthermore, this commit develops an asynchronous request chaining
framework and API that iaa_crypto can use for request chaining with
parallelism, in order to fully benefit from Intel IAA's multiple
compress/decompress engines in hardware. This allows us to gain
significant latency improvements with IAA batching as compared to
synchronous request chaining.

Usage of acomp request chaining API:
====================================

Any crypto_acomp compressor can use request chaining as follows:

Step 1: Create the request chain:

  Request 0 (the first req in the chain):

    void acomp_reqchain_init(struct acomp_req *req,
                             u32 flags, crypto_completion_t compl,
                             void *data);

  Subsequent requests:

    void acomp_request_chain(struct acomp_req *req,
                             struct acomp_req *head);

Step 2: Process the request chain using the specified compress/decompress
        "op":

  2.a) Synchronous: the chain of requests is processed in series:

       int acomp_do_req_chain(struct acomp_req *req,
                              int (*op)(struct acomp_req *req));

  2.b) Asynchronous: the chain of requests is processed in parallel using
       a submit-poll paradigm:

       int acomp_do_async_req_chain(struct acomp_req *req,
                                    int (*op_submit)(struct acomp_req *req),
                                    int (*op_poll)(struct acomp_req *req));

Request chaining will be used in subsequent patches to implement
compress/decompress batching in the iaa_crypto driver for the two
supported IAA driver sync_modes:

  sync_mode = 'sync' will use (2.a),
  sync_mode = 'async' will use (2.b).

These files are directly re-used from [1], which is not yet merged:

  include/crypto/algapi.h
  include/linux/crypto.h

Hence, I am adding Herbert as the co-developer of this acomp request
chaining patch. A minimal usage sketch is shown below.
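For illustration only (not part of the patch): a caller might build and
run a chain as follows, assuming the selected driver advertises request
chaining support (CRYPTO_ALG_REQ_CHAIN), and that each reqs[i] was set
up with acomp_request_alloc()/acomp_request_set_params() as usual:

  /*
   * Sketch: chain n requests behind reqs[0] and process the whole
   * chain via the tfm's compress op; 'wait' is a struct crypto_wait.
   */
  static int example_compress_chain(struct acomp_req **reqs, int n,
                                    struct crypto_wait *wait)
  {
          int i, err;

          /* Request 0 heads the chain and carries the completion. */
          acomp_reqchain_init(reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
                              crypto_req_done, wait);

          for (i = 1; i < n; i++)
                  acomp_request_chain(reqs[i], reqs[0]);

          /* The driver processes the entire chain off the head req. */
          err = crypto_wait_req(crypto_acomp_compress(reqs[0]), wait);

          /* Per-request status is retained on each request. */
          for (i = 0; i < n; i++)
                  pr_debug("req %d status: %d\n", i,
                           acomp_request_err(reqs[i]));

          return err;
  }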
[1]: https://lore.kernel.org/linux-crypto/677614fbdc70b31df2e26483c8d2cd1510c8af91.1730021644.git.herbert@gondor.apana.org.au/

Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
Co-developed-by: Herbert Xu
Signed-off-by:
---
 crypto/acompress.c                  | 284 ++++++++++++++++++++++++++++
 include/crypto/acompress.h          |  46 +++++
 include/crypto/algapi.h             |  10 +
 include/crypto/internal/acompress.h |  10 +
 include/linux/crypto.h              |  39 ++++
 5 files changed, 389 insertions(+)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 6fdf0ff9f3c0..cb6444d09dd7 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -23,6 +23,19 @@ struct crypto_scomp;
 
 static const struct crypto_type crypto_acomp_type;
 
+struct acomp_save_req_state {
+	struct list_head head;
+	struct acomp_req *req0;
+	struct acomp_req *cur;
+	int (*op)(struct acomp_req *req);
+	crypto_completion_t compl;
+	void *data;
+};
+
+static void acomp_reqchain_done(void *data, int err);
+static int acomp_save_req(struct acomp_req *req, crypto_completion_t cplt);
+static void acomp_restore_req(struct acomp_req *req);
+
 static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
 {
 	return container_of(alg, struct acomp_alg, calg.base);
@@ -123,6 +136,277 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
 
+static int acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+	gfp_t gfp;
+	u32 flags;
+
+	if (!acomp_is_async(tfm))
+		return 0;
+
+	flags = acomp_request_flags(req);
+	gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
+	state = kmalloc(sizeof(*state), gfp);
+	if (!state)
+		return -ENOMEM;
+
+	state->compl = req->base.complete;
+	state->data = req->base.data;
+	state->req0 = req;
+
+	req->base.complete = cplt;
+	req->base.data = state;
+
+	return 0;
+}
+
+static void acomp_restore_req(struct acomp_req *req)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+
+	if (!acomp_is_async(tfm))
+		return;
+
+	state = req->base.data;
+
+	req->base.complete = state->compl;
+	req->base.data = state->data;
+	kfree(state);
+}
+
+static int acomp_reqchain_finish(struct acomp_save_req_state *state,
+				 int err, u32 mask)
+{
+	struct acomp_req *req0 = state->req0;
+	struct acomp_req *req = state->cur;
+	struct acomp_req *n;
+
+	req->base.err = err;
+
+	if (req == req0)
+		INIT_LIST_HEAD(&req->base.list);
+	else
+		list_add_tail(&req->base.list, &req0->base.list);
+
+	list_for_each_entry_safe(req, n, &state->head, base.list) {
+		list_del_init(&req->base.list);
+
+		req->base.flags &= mask;
+		req->base.complete = acomp_reqchain_done;
+		req->base.data = state;
+		state->cur = req;
+		err = state->op(req);
+
+		if (err == -EINPROGRESS) {
+			if (!list_empty(&state->head))
+				err = -EBUSY;
+			goto out;
+		}
+
+		if (err == -EBUSY)
+			goto out;
+
+		req->base.err = err;
+		list_add_tail(&req->base.list, &req0->base.list);
+	}
+
+	acomp_restore_req(req0);
+
+out:
+	return err;
+}
+
+static void acomp_reqchain_done(void *data, int err)
+{
+	struct acomp_save_req_state *state = data;
+	crypto_completion_t compl = state->compl;
+
+	data = state->data;
+
+	if (err == -EINPROGRESS) {
+		if (!list_empty(&state->head))
+			return;
+		goto notify;
+	}
+
+	err = acomp_reqchain_finish(state, err, CRYPTO_TFM_REQ_MAY_BACKLOG);
+	if (err == -EBUSY)
+		return;
+
+notify:
+	compl(data, err);
+}
+
+int acomp_do_req_chain(struct acomp_req *req,
+		       int (*op)(struct acomp_req *req))
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+	struct acomp_save_req_state state0;
+	int err = 0;
+
+	if (!acomp_request_chained(req) || list_empty(&req->base.list) ||
+	    !crypto_acomp_req_chain(tfm))
+		return op(req);
+
+	state = &state0;
+
+	if (acomp_is_async(tfm)) {
+		err = acomp_save_req(req, acomp_reqchain_done);
+		if (err) {
+			struct acomp_req *r2;
+
+			req->base.err = err;
+			list_for_each_entry(r2, &req->base.list, base.list)
+				r2->base.err = err;
+
+			return err;
+		}
+
+		state = req->base.data;
+	}
+
+	state->op = op;
+	state->cur = req;
+	INIT_LIST_HEAD(&state->head);
+	list_splice(&req->base.list, &state->head);
+
+	err = op(req);
+	if (err == -EBUSY || err == -EINPROGRESS)
+		return -EBUSY;
+
+	return acomp_reqchain_finish(state, err, ~0);
+}
+EXPORT_SYMBOL_GPL(acomp_do_req_chain);
+
+static void acomp_async_reqchain_done(struct acomp_req *req0,
+				      struct list_head *state,
+				      int (*op_poll)(struct acomp_req *req))
+{
+	struct acomp_req *req, *n;
+	bool req0_done = false;
+	int err;
+
+	while (!list_empty(state)) {
+
+		if (!req0_done) {
+			err = op_poll(req0);
+			if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+				req0->base.err = err;
+				req0_done = true;
+			}
+		}
+
+		list_for_each_entry_safe(req, n, state, base.list) {
+			err = op_poll(req);
+
+			if (err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)
+				continue;
+
+			req->base.err = err;
+			list_del_init(&req->base.list);
+			list_add_tail(&req->base.list, &req0->base.list);
+		}
+	}
+
+	while (!req0_done) {
+		err = op_poll(req0);
+		if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+			req0->base.err = err;
+			break;
+		}
+	}
+}
+
+static int acomp_async_reqchain_finish(struct acomp_req *req0,
+				       struct list_head *state,
+				       int (*op_submit)(struct acomp_req *req),
+				       int (*op_poll)(struct acomp_req *req))
+{
+	struct acomp_req *req, *n;
+	int err = 0;
+
+	INIT_LIST_HEAD(&req0->base.list);
+
+	list_for_each_entry_safe(req, n, state, base.list) {
+		BUG_ON(req == req0);
+
+		err = op_submit(req);
+
+		if (!(err == -EINPROGRESS || err == -EBUSY)) {
+			req->base.err = err;
+			list_del_init(&req->base.list);
+			list_add_tail(&req->base.list, &req0->base.list);
+		}
+	}
+
+	acomp_async_reqchain_done(req0, state, op_poll);
+
+	return req0->base.err;
+}
+
+int acomp_do_async_req_chain(struct acomp_req *req,
+			     int (*op_submit)(struct acomp_req *req),
+			     int (*op_poll)(struct acomp_req *req))
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct list_head state;
+	struct acomp_req *r2;
+	int err = 0;
+	void *req0_data = req->base.data;
+
+	if (!acomp_request_chained(req) || list_empty(&req->base.list) ||
+	    !acomp_is_async(tfm) || !crypto_acomp_req_chain(tfm)) {
+
+		err = op_submit(req);
+
+		if (err == -EINPROGRESS || err == -EBUSY) {
+			bool req0_done = false;
+
+			while (!req0_done) {
+				err = op_poll(req);
+				if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+					req->base.err = err;
+					break;
+				}
+			}
+		} else {
+			req->base.err = err;
+		}
+
+		req->base.data = req0_data;
+		if (acomp_is_async(tfm))
+			req->base.complete(req->base.data, req->base.err);
+
+		return err;
+	}
+
+	err = op_submit(req);
+	req->base.err = err;
+
+	if (err && !(err == -EINPROGRESS || err == -EBUSY))
+		goto err_prop;
+
+	INIT_LIST_HEAD(&state);
+	list_splice(&req->base.list, &state);
+
+	err = acomp_async_reqchain_finish(req, &state, op_submit, op_poll);
+	req->base.data = req0_data;
+	req->base.complete(req->base.data, req->base.err);
+
+	return err;
+
+err_prop:
+	list_for_each_entry(r2, &req->base.list, base.list)
+		r2->base.err = err;
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(acomp_do_async_req_chain);
+
 struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 54937b615239..e6783deba3ac 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -206,6 +206,7 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
 	req->base.data = data;
 	req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
 	req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags &= ~CRYPTO_TFM_REQ_CHAIN;
 }
 
 /**
@@ -237,6 +238,51 @@ static inline void acomp_request_set_params(struct acomp_req *req,
 		req->flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
 }
 
+static inline u32 acomp_request_flags(struct acomp_req *req)
+{
+	return req->base.flags;
+}
+
+static inline void acomp_reqchain_init(struct acomp_req *req,
+				       u32 flags, crypto_completion_t compl,
+				       void *data)
+{
+	acomp_request_set_callback(req, flags, compl, data);
+	crypto_reqchain_init(&req->base);
+}
+
+static inline bool acomp_is_reqchain(struct acomp_req *req)
+{
+	return crypto_is_reqchain(&req->base);
+}
+
+static inline void acomp_reqchain_clear(struct acomp_req *req, void *data)
+{
+	struct crypto_wait *wait = (struct crypto_wait *)data;
+
+	reinit_completion(&wait->completion);
+	crypto_reqchain_clear(&req->base);
+	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				   crypto_req_done, data);
+}
+
+static inline void acomp_request_chain(struct acomp_req *req,
+				       struct acomp_req *head)
+{
+	crypto_request_chain(&req->base, &head->base);
+}
+
+int acomp_do_req_chain(struct acomp_req *req,
+		       int (*op)(struct acomp_req *req));
+
+int acomp_do_async_req_chain(struct acomp_req *req,
+			     int (*op_submit)(struct acomp_req *req),
+			     int (*op_poll)(struct acomp_req *req));
+
+static inline int acomp_request_err(struct acomp_req *req)
+{
+	return req->base.err;
+}
+
 /**
  * crypto_acomp_compress() -- Invoke asynchronous compress operation
  *
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 156de41ca760..c5df380c7d08 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -271,4 +271,14 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
 	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline bool crypto_request_chained(struct crypto_async_request *req)
+{
+	return req->flags & CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline bool crypto_tfm_req_chain(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_REQ_CHAIN;
+}
+
 #endif /* _CRYPTO_ALGAPI_H */
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..53b4ef59b48c 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -84,6 +84,16 @@ static inline void __acomp_request_free(struct acomp_req *req)
 	kfree_sensitive(req);
 }
 
+static inline bool acomp_request_chained(struct acomp_req *req)
+{
+	return crypto_request_chained(&req->base);
+}
+
+static inline bool crypto_acomp_req_chain(struct crypto_acomp *tfm)
+{
+	return crypto_tfm_req_chain(&tfm->base);
+}
+
 /**
  * crypto_register_acomp() -- Register asynchronous compression algorithm
  *
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index b164da5e129e..f1bc282e1ed6 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -13,6 +13,8 @@
 #define _LINUX_CRYPTO_H
 
 #include
+#include
+#include
 #include
 #include
 #include
@@ -124,6 +126,9 @@
  */
 #define CRYPTO_ALG_FIPS_INTERNAL	0x00020000
 
+/* Set if the algorithm supports request chains. */
+#define CRYPTO_ALG_REQ_CHAIN		0x00040000
+
 /*
  * Transform masks and values (for crt_flags).
  */
@@ -133,6 +138,7 @@
 #define CRYPTO_TFM_REQ_FORBID_WEAK_KEYS	0x00000100
 #define CRYPTO_TFM_REQ_MAY_SLEEP	0x00000200
 #define CRYPTO_TFM_REQ_MAY_BACKLOG	0x00000400
+#define CRYPTO_TFM_REQ_CHAIN		0x00000800
 
 /*
  * Miscellaneous stuff.
@@ -174,6 +180,7 @@ struct crypto_async_request {
 	struct crypto_tfm *tfm;
 
 	u32 flags;
+	int err;
 };
 
 /**
@@ -391,6 +398,9 @@ void crypto_req_done(void *req, int err);
 
 static inline int crypto_wait_req(int err, struct crypto_wait *wait)
 {
+	if (!wait)
+		return err;
+
 	switch (err) {
 	case -EINPROGRESS:
 	case -EBUSY:
@@ -540,5 +550,34 @@ int crypto_comp_decompress(struct crypto_comp *tfm,
 			   const u8 *src, unsigned int slen,
 			   u8 *dst, unsigned int *dlen);
 
+static inline void crypto_reqchain_init(struct crypto_async_request *req)
+{
+	req->err = -EINPROGRESS;
+	req->flags |= CRYPTO_TFM_REQ_CHAIN;
+	INIT_LIST_HEAD(&req->list);
+}
+
+static inline bool crypto_is_reqchain(struct crypto_async_request *req)
+{
+	return req->flags & CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline void crypto_reqchain_clear(struct crypto_async_request *req)
+{
+	req->flags &= ~CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline void crypto_request_chain(struct crypto_async_request *req,
+					struct crypto_async_request *head)
+{
+	req->err = -EINPROGRESS;
+	list_add_tail(&req->list, &head->list);
+}
+
+static inline bool crypto_tfm_is_async(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_ASYNC;
+}
+
 #endif /* _LINUX_CRYPTO_H */
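For completeness, a sketch of how a driver whose hardware exposes
separate submit and poll primitives might plug into the asynchronous
chaining entry point; my_submit() and my_poll() are hypothetical driver
functions (poll returning -EAGAIN while a job is still in flight):

  /*
   * Sketch only: acomp_do_async_req_chain() handles both chained and
   * single requests, polling each to completion (see its definition
   * above), so the driver's .compress op can simply dispatch to it.
   */
  static int my_compress(struct acomp_req *req)
  {
          return acomp_do_async_req_chain(req, my_submit, my_poll);
  }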
From patchwork Mon Mar 3 08:47:13 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869914
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 03/14] crypto: iaa - Add an acomp_req flag CRYPTO_ACOMP_REQ_POLL to enable async mode.
Date: Mon, 3 Mar 2025 00:47:13 -0800
Message-Id: <20250303084724.6490-4-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

If the iaa_crypto driver has async_mode set to true, and use_irq set to
false, it can still be forced to use synchronous mode by turning off the
CRYPTO_ACOMP_REQ_POLL flag in req->flags. In other words, all three of
the following need to be true for a request to be processed in fully
async poll mode:

  1) async_mode should be "true"
  2) use_irq should be "false"
  3) req->flags & CRYPTO_ACOMP_REQ_POLL should be "true"
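For illustration only (not part of the patch), a caller opts a request
into or out of poll-mode processing before submitting it:

  /* Sketch: request fully async submit-then-poll processing ... */
  req->flags |= CRYPTO_ACOMP_REQ_POLL;
  err = crypto_acomp_compress(req);

  /* ... or force the synchronous path despite async_mode == true: */
  req->flags &= ~CRYPTO_ACOMP_REQ_POLL;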
Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 11 ++++++++++-
 include/crypto/acompress.h                 |  5 +++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index c3776b0de51d..d7983ab3c34a 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1520,6 +1520,10 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		return -EINVAL;
 	}
 
+	/* If the caller has requested no polling, disable async. */
+	if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+		disable_async = true;
+
 	cpu = get_cpu();
 	wq = wq_table_next_wq(cpu);
 	put_cpu();
@@ -1712,6 +1716,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 {
 	struct crypto_tfm *tfm = req->base.tfm;
 	dma_addr_t src_addr, dst_addr;
+	bool disable_async = false;
 	int nr_sgs, cpu, ret = 0;
 	struct iaa_wq *iaa_wq;
 	struct device *dev;
@@ -1727,6 +1732,10 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		return -EINVAL;
 	}
 
+	/* If the caller has requested no polling, disable async. */
+	if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+		disable_async = true;
+
 	if (!req->dst)
 		return iaa_comp_adecompress_alloc_dest(req);
 
@@ -1775,7 +1784,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		       req->dst, req->dlen, sg_dma_len(req->dst));
 
 	ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
-			     dst_addr, &req->dlen, false);
+			     dst_addr, &req->dlen, disable_async);
 	if (ret == -EINPROGRESS)
 		return ret;
 
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 147f184b6bea..afadf84f236d 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -14,6 +14,11 @@
 #include
 
 #define CRYPTO_ACOMP_ALLOC_OUTPUT	0x00000001
+/*
+ * If set, the driver must have a way to submit the req, then
+ * poll its completion status for success/error.
+ */
+#define CRYPTO_ACOMP_REQ_POLL		0x00000002
 #define CRYPTO_ACOMP_DST_MAX		131072
 
 /**
From patchwork Mon Mar 3 08:47:14 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869913
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 04/14] crypto: iaa - Implement batch compression/decompression with request chaining.
Date: Mon, 3 Mar 2025 00:47:14 -0800
Message-Id: <20250303084724.6490-5-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch provides the iaa_crypto driver implementation for the newly
added crypto_acomp "get_batch_size()" interface, which will be called
when swap modules invoke crypto_acomp_batch_size() to query the maximum
batch size. For IAA, this returns a driver-specific constant,
IAA_CRYPTO_MAX_BATCH_SIZE (currently 8U).

This allows swap modules such as zswap/zram to allocate the required
batching resources and then invoke fully asynchronous, batch-parallel
compression/decompression of pages on systems with Intel IAA, by setting
up a request chain and calling crypto_acomp_compress() or
crypto_acomp_decompress() with the head request in the chain. A sketch
of this usage follows below.

This enables zswap compress batching code to be developed in a manner
similar to the current single-page synchronous calls to
crypto_acomp_compress() and crypto_acomp_decompress(), thereby
facilitating an encapsulated and modular hand-off between the kernel
zswap/zram code and the crypto_acomp layer.

This patch also provides implementations of IAA batching with request
chaining for both iaa_crypto sync modes: asynchronous/no-irq and fully
synchronous.

Since iaa_crypto supports the use of acomp request chaining, this patch
also adds CRYPTO_ALG_REQ_CHAIN to the iaa_acomp_fixed_deflate
algorithm's cra_flags.
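For illustration only (not part of the patch): a swap module might
batch-compress pages along these lines, assuming acomp/reqs/wait were
set up beforehand; handle_result() is a hypothetical per-page handler:

  /* Sketch: compress nr_pages in chunks of the driver's batch size. */
  unsigned int batch = crypto_acomp_batch_size(acomp);
  int i, j, n, err;

  for (i = 0; i < nr_pages; i += batch) {
          n = min_t(int, nr_pages - i, batch);

          acomp_reqchain_init(reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
                              crypto_req_done, &wait);
          for (j = 1; j < n; j++)
                  acomp_request_chain(reqs[j], reqs[0]);

          /* The head request kicks off the whole batch in the driver. */
          err = crypto_wait_req(crypto_acomp_compress(reqs[0]), &wait);

          for (j = 0; j < n; j++)
                  handle_result(pages[i + j], acomp_request_err(reqs[j]));
  }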
Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |   9 +
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 186 ++++++++++++++++++++-
 2 files changed, 192 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 56985e395263..45d94a646636 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -39,6 +39,15 @@
 	 IAA_DECOMP_CHECK_FOR_EOB | \
 	 IAA_DECOMP_STOP_ON_EOB)
 
+/*
+ * The maximum compress/decompress batch size for IAA's implementation of
+ * batched compressions/decompressions.
+ * The IAA compression algorithms should provide the crypto_acomp
+ * get_batch_size() interface through a function that returns this
+ * constant.
+ */
+#define IAA_CRYPTO_MAX_BATCH_SIZE 8U
+
 /* Representation of IAA workqueue */
 struct iaa_wq {
 	struct list_head list;
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index d7983ab3c34a..a9800b8f3575 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1807,6 +1807,185 @@ static void compression_ctx_init(struct iaa_compression_ctx *ctx)
 	ctx->use_irq = use_irq;
 }
 
+static int iaa_comp_poll(struct acomp_req *req)
+{
+	struct idxd_desc *idxd_desc;
+	struct idxd_device *idxd;
+	struct iaa_wq *iaa_wq;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct idxd_wq *wq;
+	bool compress_op;
+	int ret;
+
+	idxd_desc = req->base.data;
+	if (!idxd_desc)
+		return -EAGAIN;
+
+	compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+	wq = idxd_desc->wq;
+	iaa_wq = idxd_wq_get_private(wq);
+	idxd = iaa_wq->iaa_device->idxd;
+	pdev = idxd->pdev;
+	dev = &pdev->dev;
+
+	ret = check_completion(dev, idxd_desc->iax_completion, true, true);
+	if (ret == -EAGAIN)
+		return ret;
+	if (ret)
+		goto out;
+
+	req->dlen = idxd_desc->iax_completion->output_size;
+
+	/* Update stats */
+	if (compress_op) {
+		update_total_comp_bytes_out(req->dlen);
+		update_wq_comp_bytes(wq, req->dlen);
+	} else {
+		update_total_decomp_bytes_in(req->slen);
+		update_wq_decomp_bytes(wq, req->slen);
+	}
+
+	if (iaa_verify_compress && (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS)) {
+		struct crypto_tfm *tfm = req->base.tfm;
+		dma_addr_t src_addr, dst_addr;
+		u32 compression_crc;
+
+		compression_crc = idxd_desc->iax_completion->crc;
+
+		dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+		dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+		src_addr = sg_dma_address(req->src);
+		dst_addr = sg_dma_address(req->dst);
+
+		ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+					  dst_addr, &req->dlen, compression_crc);
+	}
+out:
+	/* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+
+	dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+	idxd_free_desc(idxd_desc->wq, idxd_desc);
+
+	dev_dbg(dev, "%s: returning ret=%d\n", __func__, ret);
+
+	return ret;
+}
+
+static unsigned int iaa_comp_get_batch_size(void)
+{
+	return IAA_CRYPTO_MAX_BATCH_SIZE;
+}
+
+static void iaa_set_reqchain_poll(
+	struct acomp_req *req0,
+	bool set_flag)
+{
+	struct acomp_req *req;
+
+	set_flag ? (req0->flags |= CRYPTO_ACOMP_REQ_POLL) :
+		   (req0->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+
+	list_for_each_entry(req, &req0->base.list, base.list)
+		set_flag ? (req->flags |= CRYPTO_ACOMP_REQ_POLL) :
+			   (req->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+}
+
+/**
+ * This API provides IAA compress batching functionality for use by swap
+ * modules. Batching is implemented using request chaining.
+ *
+ * @req: The head asynchronous compress request in the chain.
+ *
+ * Returns the compression error status (0 or -errno) of the last
+ * request that finishes. Caller should call acomp_request_err()
+ * for each request in the chain, to get its error status.
+ */
+static int iaa_comp_acompress_batch(struct acomp_req *req)
+{
+	bool async = (async_mode && !use_irq);
+	int err = 0;
+
+	if (likely(async))
+		iaa_set_reqchain_poll(req, true);
+	else
+		iaa_set_reqchain_poll(req, false);
+
+	if (likely(async))
+		/* Process the request chain in parallel. */
+		err = acomp_do_async_req_chain(req, iaa_comp_acompress, iaa_comp_poll);
+	else
+		/* Process the request chain in series. */
+		err = acomp_do_req_chain(req, iaa_comp_acompress);
+
+	/*
+	 * For the same request chain to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress() in synchronous mode,
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs.
+	 */
+	iaa_set_reqchain_poll(req, false);
+
+	return err;
+}
+
+/**
+ * This API provides IAA decompress batching functionality for use by swap
+ * modules. Batching is implemented using request chaining.
+ *
+ * @req: The head asynchronous decompress request in the chain.
+ *
+ * Returns the decompression error status (0 or -errno) of the last
+ * request that finishes. Caller should call acomp_request_err()
+ * for each request in the chain, to get its error status.
+ */
+static int iaa_comp_adecompress_batch(struct acomp_req *req)
+{
+	bool async = (async_mode && !use_irq);
+	int err = 0;
+
+	if (likely(async))
+		iaa_set_reqchain_poll(req, true);
+	else
+		iaa_set_reqchain_poll(req, false);
+
+	if (likely(async))
+		/* Process the request chain in parallel. */
+		err = acomp_do_async_req_chain(req, iaa_comp_adecompress, iaa_comp_poll);
+	else
+		/* Process the request chain in series. */
+		err = acomp_do_req_chain(req, iaa_comp_adecompress);
+
+	/*
+	 * For the same request chain to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress() in synchronous mode,
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs.
+	 */
+	iaa_set_reqchain_poll(req, false);
+
+	return err;
+}
+
+static int iaa_compress_main(struct acomp_req *req)
+{
+	if (acomp_is_reqchain(req))
+		return iaa_comp_acompress_batch(req);
+
+	return iaa_comp_acompress(req);
+}
+
+static int iaa_decompress_main(struct acomp_req *req)
+{
+	if (acomp_is_reqchain(req))
+		return iaa_comp_adecompress_batch(req);
+
+	return iaa_comp_adecompress(req);
+}
+
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
@@ -1829,13 +2008,14 @@ static void dst_free(struct scatterlist *sgl)
 
 static struct acomp_alg iaa_acomp_fixed_deflate = {
 	.init			= iaa_comp_init_fixed,
-	.compress		= iaa_comp_acompress,
-	.decompress		= iaa_comp_adecompress,
+	.compress		= iaa_compress_main,
+	.decompress		= iaa_decompress_main,
 	.dst_free		= dst_free,
+	.get_batch_size		= iaa_comp_get_batch_size,
 	.base			= {
 		.cra_name		= "deflate",
 		.cra_driver_name	= "deflate-iaa",
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_CHAIN,
 		.cra_ctxsize		= sizeof(struct iaa_compression_ctx),
 		.cra_module		= THIS_MODULE,
 		.cra_priority		= IAA_ALG_PRIORITY,
From patchwork Mon Mar 3 08:47:18 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869912
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 08/14] crypto: iaa - Map IAA devices/wqs to cores based on packages instead of NUMA.
Date: Mon, 3 Mar 2025 00:47:18 -0800
Message-Id: <20250303084724.6490-9-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch modifies the algorithm for mapping available IAA devices and
wqs to cores, as they are being discovered, based on packages instead of
NUMA nodes. This leads to a more realistic mapping of IAA devices as
compression/decompression resources for a package, rather than for a
NUMA node. This also resolves problems that were observed during
internal validation on Intel platforms with many more NUMA nodes than
packages: in such cases, the earlier NUMA-based allocation caused some
IAAs to be over-subscribed and others to not be utilized at all.

As a result of this change from NUMA to packages, some of the core
functions used by the iaa_crypto driver's "probe" and "remove" API have
been re-written. The new infrastructure maintains a static/global
mapping of "local wqs" per IAA device, in the "struct iaa_device"
itself. The earlier implementation would allocate memory per-cpu for
this data, which never changes once the IAA devices/wqs have been
initialized.

Two main outcomes of this new iaa_crypto driver infrastructure are:

1) Resolves "task blocked for more than x seconds" errors observed
   during internal validation on Intel systems with the earlier NUMA
   node based mappings, which were root-caused to the non-optimal
   IAA-to-core mappings described earlier.

2) Results in a NUM_THREADS factor reduction in memory footprint cost
   of initializing IAA devices/wqs, due to eliminating the per-cpu
   copies of each IAA device's wqs. On a 384-core Intel Granite Rapids
   server with 8 IAA devices, this saves 140MiB.
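To make the new mapping concrete, a worked example with assumed topology
numbers (not taken from the patch): 2 packages, 192 logical CPUs per
package, 8 IAA devices total (4 per package), following the arithmetic
in cpu_to_iaa() in the diff below:

  cpus_per_iaa       = (nr_packages * nr_cpus_per_package) / nr_iaa
                     = (2 * 192) / 8 = 48
  nr_iaa_per_package = nr_iaa / nr_packages = 8 / 2 = 4

  For cpu 250 (on package 1):
      base_iaa = package_id * nr_iaa_per_package = 1 * 4 = 4
      iaa      = base_iaa + ((cpu % nr_cpus_per_package) / cpus_per_iaa)
               = 4 + ((250 % 192) / 48) = 4 + 1 = 5

so cpu 250 only ever dispatches to an IAA device on its own package.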
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |  17 +-
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 276 ++++++++++++---------
 2 files changed, 171 insertions(+), 122 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 45d94a646636..72ffdf55f7b3 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -55,6 +55,7 @@ struct iaa_wq {
 	struct idxd_wq *wq;
 	int ref;
 	bool remove;
+	bool mapped;
 
 	struct iaa_device *iaa_device;
 
@@ -72,6 +73,13 @@ struct iaa_device_compression_mode {
 	dma_addr_t aecs_comp_table_dma_addr;
 };
 
+struct wq_table_entry {
+	struct idxd_wq **wqs;
+	int max_wqs;
+	int n_wqs;
+	int cur_wq;
+};
+
 /* Representation of IAA device with wqs, populated by probe */
 struct iaa_device {
 	struct list_head list;
@@ -82,19 +90,14 @@ struct iaa_device {
 	int n_wq;
 	struct list_head wqs;
 
+	struct wq_table_entry *iaa_local_wqs;
+
 	atomic64_t comp_calls;
 	atomic64_t comp_bytes;
 	atomic64_t decomp_calls;
 	atomic64_t decomp_bytes;
 };
 
-struct wq_table_entry {
-	struct idxd_wq **wqs;
-	int max_wqs;
-	int n_wqs;
-	int cur_wq;
-};
-
 #define IAA_AECS_ALIGN 32
 
 /*
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index abaee160e5ec..40751d7c83c0 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -30,8 +30,9 @@
 /* number of iaa instances probed */
 static unsigned int nr_iaa;
 static unsigned int nr_cpus;
-static unsigned int nr_nodes;
-static unsigned int nr_cpus_per_node;
+static unsigned int nr_packages;
+static unsigned int nr_cpus_per_package;
+static unsigned int nr_iaa_per_package;
 
 /* Number of physical cpus sharing each iaa instance */
 static unsigned int cpus_per_iaa;
@@ -462,17 +463,46 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device)
  * Functions for use in crypto probe and remove interfaces:
  * allocate/init/query/deallocate devices/wqs.
  ***********************************************************/
-static struct iaa_device *iaa_device_alloc(void)
+static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd)
 {
+	struct wq_table_entry *local;
 	struct iaa_device *iaa_device;
 
 	iaa_device = kzalloc(sizeof(*iaa_device), GFP_KERNEL);
 	if (!iaa_device)
-		return NULL;
+		goto err;
+
+	iaa_device->idxd = idxd;
+
+	/* IAA device's local wqs. */
+	iaa_device->iaa_local_wqs = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL);
+	if (!iaa_device->iaa_local_wqs)
+		goto err;
+
+	local = iaa_device->iaa_local_wqs;
+
+	local->wqs = kzalloc(iaa_device->idxd->max_wqs * sizeof(struct wq *), GFP_KERNEL);
+	if (!local->wqs)
+		goto err;
+
+	local->max_wqs = iaa_device->idxd->max_wqs;
+	local->n_wqs = 0;
 
 	INIT_LIST_HEAD(&iaa_device->wqs);
 
 	return iaa_device;
+
+err:
+	if (iaa_device) {
+		if (iaa_device->iaa_local_wqs) {
+			if (iaa_device->iaa_local_wqs->wqs)
+				kfree(iaa_device->iaa_local_wqs->wqs);
+			kfree(iaa_device->iaa_local_wqs);
+		}
+		kfree(iaa_device);
+	}
+
+	return NULL;
 }
 
 static bool iaa_has_wq(struct iaa_device *iaa_device, struct idxd_wq *wq)
@@ -491,12 +521,10 @@ static struct iaa_device *add_iaa_device(struct idxd_device *idxd)
 {
 	struct iaa_device *iaa_device;
 
-	iaa_device = iaa_device_alloc();
+	iaa_device = iaa_device_alloc(idxd);
 	if (!iaa_device)
 		return NULL;
 
-	iaa_device->idxd = idxd;
-
 	list_add_tail(&iaa_device->list, &iaa_devices);
 
 	nr_iaa++;
@@ -537,6 +565,7 @@ static int add_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq,
 	iaa_wq->wq = wq;
 	iaa_wq->iaa_device = iaa_device;
 	idxd_wq_set_private(wq, iaa_wq);
+	iaa_wq->mapped = false;
 
 	list_add_tail(&iaa_wq->list, &iaa_device->wqs);
 
@@ -580,6 +609,13 @@ static void free_iaa_device(struct iaa_device *iaa_device)
 		return;
 
 	remove_device_compression_modes(iaa_device);
+
+	if (iaa_device->iaa_local_wqs) {
+		if (iaa_device->iaa_local_wqs->wqs)
+			kfree(iaa_device->iaa_local_wqs->wqs);
+		kfree(iaa_device->iaa_local_wqs);
+	}
+
 	kfree(iaa_device);
 }
 
@@ -716,9 +752,14 @@ static int save_iaa_wq(struct idxd_wq *wq)
 	if (WARN_ON(nr_iaa == 0))
 		return -EINVAL;
 
-	cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa;
+	cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa;
 	if (!cpus_per_iaa)
 		cpus_per_iaa = 1;
+
+	nr_iaa_per_package = nr_iaa / nr_packages;
+	if (!nr_iaa_per_package)
+		nr_iaa_per_package = 1;
+
 out:
 	return 0;
 }
@@ -735,53 +776,45 @@ static void remove_iaa_wq(struct idxd_wq *wq)
 	}
 
 	if (nr_iaa) {
-		cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa;
+		cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa;
 		if (!cpus_per_iaa)
 			cpus_per_iaa = 1;
-	} else
+
+		nr_iaa_per_package = nr_iaa / nr_packages;
+		if (!nr_iaa_per_package)
+			nr_iaa_per_package = 1;
+	} else {
 		cpus_per_iaa = 1;
+		nr_iaa_per_package = 1;
+	}
 }
 
 /***************************************************************
  * Mapping IAA devices and wqs to cores with per-cpu wq_tables.
  ***************************************************************/
-static void wq_table_free_entry(int cpu)
-{
-	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
-
-	kfree(entry->wqs);
-	memset(entry, 0, sizeof(*entry));
-}
-
-static void wq_table_clear_entry(int cpu)
-{
-	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
-
-	entry->n_wqs = 0;
-	entry->cur_wq = 0;
-	memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *));
-}
-
-static void clear_wq_table(void)
+/*
+ * Given a cpu, find the closest IAA instance. The idea is to try to
+ * choose the most appropriate IAA instance for a caller and spread
+ * available workqueues around to clients.
+ */
+static inline int cpu_to_iaa(int cpu)
 {
-	int cpu;
-
-	for (cpu = 0; cpu < nr_cpus; cpu++)
-		wq_table_clear_entry(cpu);
+	int package_id, base_iaa, iaa = 0;
 
-	pr_debug("cleared wq table\n");
-}
+	if (!nr_packages || !nr_iaa_per_package)
+		return 0;
 
-static void free_wq_table(void)
-{
-	int cpu;
+	package_id = topology_logical_package_id(cpu);
+	base_iaa = package_id * nr_iaa_per_package;
+	iaa = base_iaa + ((cpu % nr_cpus_per_package) / cpus_per_iaa);
 
-	for (cpu = 0; cpu < nr_cpus; cpu++)
-		wq_table_free_entry(cpu);
+	pr_debug("cpu = %d, package_id = %d, base_iaa = %d, iaa = %d",
+		 cpu, package_id, base_iaa, iaa);
 
-	free_percpu(wq_table);
+	if (iaa >= 0 && iaa < nr_iaa)
+		return iaa;
 
-	pr_debug("freed wq table\n");
+	return (nr_iaa - 1);
 }
 
 static int alloc_wq_table(int max_wqs)
@@ -795,13 +828,11 @@ static int alloc_wq_table(int max_wqs)
 
 	for (cpu = 0; cpu < nr_cpus; cpu++) {
 		entry = per_cpu_ptr(wq_table, cpu);
-		entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL);
-		if (!entry->wqs) {
-			free_wq_table();
-			return -ENOMEM;
-		}
 
+		entry->wqs = NULL;
 		entry->max_wqs = max_wqs;
+		entry->n_wqs = 0;
+		entry->cur_wq = 0;
 	}
 
 	pr_debug("initialized wq table\n");
@@ -809,33 +840,27 @@ static int alloc_wq_table(int max_wqs)
 	return 0;
 }
 
-static void wq_table_add(int cpu, struct idxd_wq *wq)
+static void wq_table_add(int cpu, struct wq_table_entry *iaa_local_wqs)
 {
 	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
 
-	if (WARN_ON(entry->n_wqs == entry->max_wqs))
-		return;
-
-	entry->wqs[entry->n_wqs++] = wq;
+	entry->wqs = iaa_local_wqs->wqs;
+	entry->max_wqs = iaa_local_wqs->max_wqs;
+	entry->n_wqs = iaa_local_wqs->n_wqs;
+	entry->cur_wq = 0;
 
-	pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__,
+	pr_debug("%s: cpu %d: added %d iaa local wqs up to wq %d.%d\n", __func__,
+		 cpu, entry->n_wqs,
 		 entry->wqs[entry->n_wqs - 1]->idxd->id,
-		 entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu);
+		 entry->wqs[entry->n_wqs - 1]->id);
 }
 
 static int wq_table_add_wqs(int iaa, int cpu)
 {
 	struct iaa_device *iaa_device, *found_device = NULL;
-	int ret = 0, cur_iaa = 0, n_wqs_added = 0;
-	struct idxd_device *idxd;
-	struct iaa_wq *iaa_wq;
-	struct pci_dev *pdev;
-	struct device *dev;
+	int ret = 0, cur_iaa = 0;
 
 	list_for_each_entry(iaa_device, &iaa_devices, list) {
-		idxd = iaa_device->idxd;
-		pdev = idxd->pdev;
-		dev = &pdev->dev;
 
 		if (cur_iaa != iaa) {
 			cur_iaa++;
@@ -843,7 +868,8 @@ static int wq_table_add_wqs(int iaa, int cpu)
 		}
 
 		found_device = iaa_device;
-		dev_dbg(dev, "getting wq from iaa_device %d, cur_iaa %d\n",
+		dev_dbg(&found_device->idxd->pdev->dev,
+			"getting wq from iaa_device %d, cur_iaa %d\n",
 			found_device->idxd->id, cur_iaa);
 		break;
 	}
@@ -858,29 +884,58 @@ static int wq_table_add_wqs(int iaa, int cpu)
 		}
 
 		cur_iaa = 0;
-		idxd = found_device->idxd;
-		pdev = idxd->pdev;
-		dev = &pdev->dev;
-		dev_dbg(dev, "getting wq from only iaa_device %d, cur_iaa %d\n",
+		dev_dbg(&found_device->idxd->pdev->dev,
+			"getting wq from only iaa_device %d, cur_iaa %d\n",
 			found_device->idxd->id, cur_iaa);
 	}
 
-	list_for_each_entry(iaa_wq, &found_device->wqs, list) {
-		wq_table_add(cpu, iaa_wq->wq);
-		pr_debug("rebalance: added wq for cpu=%d: iaa wq %d.%d\n",
-			 cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id);
-		n_wqs_added++;
+	wq_table_add(cpu, found_device->iaa_local_wqs);
+
+out:
+	return ret;
+}
+
+static int map_iaa_device_wqs(struct iaa_device *iaa_device)
+{
+	struct wq_table_entry *local;
+	int ret = 0, n_wqs_added = 0;
+	struct iaa_wq *iaa_wq;
+
+	local = iaa_device->iaa_local_wqs;
+
+	list_for_each_entry(iaa_wq, &iaa_device->wqs, list) {
+		if (iaa_wq->mapped && ++n_wqs_added)
+			continue;
+
+		pr_debug("iaa_device %px: processing wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id);
+
+		if (WARN_ON(local->n_wqs == local->max_wqs))
+			break;
+
+		local->wqs[local->n_wqs++] = iaa_wq->wq;
+		pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id);
+
+		iaa_wq->mapped = true;
+		++n_wqs_added;
 	}
 
-	if (!n_wqs_added) {
-		pr_debug("couldn't find any iaa wqs!\n");
+	if (!n_wqs_added && !iaa_device->n_wq) {
+		pr_debug("iaa_device %d: couldn't find any iaa wqs!\n", iaa_device->idxd->id);
 		ret = -EINVAL;
-		goto out;
 	}
 
-out:
 	return ret;
 }
 
+static void map_iaa_devices(void)
+{
+	struct iaa_device *iaa_device;
+
+	list_for_each_entry(iaa_device, &iaa_devices, list) {
+		BUG_ON(map_iaa_device_wqs(iaa_device));
+	}
+}
+
 /*
  * Rebalance the wq table so that given a cpu, it's easy to find the
  * closest IAA instance. The idea is to try to choose the most
@@ -889,48 +944,42 @@ static int wq_table_add_wqs(int iaa, int cpu)
  */
 static void rebalance_wq_table(void)
 {
-	const struct cpumask *node_cpus;
-	int node, cpu, iaa = -1;
+	int cpu, iaa;
 
 	if (nr_iaa == 0)
 		return;
 
-	pr_debug("rebalance: nr_nodes=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n",
-		 nr_nodes, nr_cpus, nr_iaa, cpus_per_iaa);
+	map_iaa_devices();
 
-	clear_wq_table();
+	pr_debug("rebalance: nr_packages=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n",
+		 nr_packages, nr_cpus, nr_iaa, cpus_per_iaa);
 
-	if (nr_iaa == 1) {
-		for (cpu = 0; cpu < nr_cpus; cpu++) {
-			if (WARN_ON(wq_table_add_wqs(0, cpu))) {
-				pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu);
-				return;
-			}
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		iaa = cpu_to_iaa(cpu);
+		pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa);
+
+		if (WARN_ON(iaa == -1)) {
+			pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu);
+			return;
 		}
 
-		return;
+		if (WARN_ON(wq_table_add_wqs(iaa, cpu))) {
+			pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
+			return;
+		}
 	}
 
-	for_each_node_with_cpus(node) {
-		node_cpus = cpumask_of_node(node);
-
-		for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) {
-			int node_cpu = cpumask_nth(cpu, node_cpus);
-
-			if (WARN_ON(node_cpu >= nr_cpu_ids)) {
-				pr_debug("node_cpu %d doesn't exist!\n", node_cpu);
-				return;
-			}
-
-			if ((cpu % cpus_per_iaa) == 0)
-				iaa++;
+	pr_debug("Finished rebalance local wqs.");
+}
 
-			if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) {
-				pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
-				return;
-			}
-		}
+static void free_wq_tables(void)
+{
+	if (wq_table) {
+		free_percpu(wq_table);
+		wq_table = NULL;
 	}
+
+	pr_debug("freed local wq table\n");
 }
 
 /***************************************************************
@@ -2134,7 +2183,7 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
 		free_iaa_wq(idxd_wq_get_private(wq));
 err_save:
 	if (first_wq)
-		free_wq_table();
+		free_wq_tables();
 err_alloc:
 	mutex_unlock(&iaa_devices_lock);
 	idxd_drv_disable_wq(wq);
@@ -2184,7 +2233,9 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev)
 
 	if (nr_iaa == 0) {
 		iaa_crypto_enabled = false;
-		free_wq_table();
+		free_wq_tables();
+		BUG_ON(!list_empty(&iaa_devices));
+		INIT_LIST_HEAD(&iaa_devices);
 
 		module_put(THIS_MODULE);
 
 		pr_info("iaa_crypto now DISABLED\n");
@@ -2210,16 +2261,11 @@ static struct idxd_device_driver iaa_crypto_driver = {
 static int __init iaa_crypto_init_module(void)
 {
 	int ret = 0;
-	int node;
 
+	INIT_LIST_HEAD(&iaa_devices);
 	nr_cpus = num_possible_cpus();
-	for_each_node_with_cpus(node)
-		nr_nodes++;
-	if (!nr_nodes) {
-		pr_err("IAA couldn't find any nodes with cpus\n");
-		return -ENODEV;
-	}
-	nr_cpus_per_node = nr_cpus / nr_nodes;
+	nr_cpus_per_package = topology_num_cores_per_package();
+	nr_packages = topology_max_packages();
 
 	if (crypto_has_comp("deflate-generic", 0, 0))
 		deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);

From patchwork Mon Mar 3 08:47:20 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869911
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
    linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
    ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 10/14] crypto: iaa - Descriptor allocation timeouts with mitigations in iaa_crypto.
Date: Mon, 3 Mar 2025 00:47:20 -0800
Message-Id: <20250303084724.6490-11-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch changes the descriptor allocation from blocking to
non-blocking, with bounded retries or "timeouts". This is necessary to
prevent task-blocked errors in high-contention scenarios, for instance,
when the platform has only 1 IAA device enabled. With 1 IAA device
enabled per package on a dual-package SPR with 56 cores/package, there
are 112 logical cores mapped to this single IAA device.

In this scenario, task-blocked errors can occur because
idxd_alloc_desc() is called with IDXD_OP_BLOCK. Any process that is
able to obtain IAA_CRYPTO_MAX_BATCH_SIZE (8U) descriptors causes
contention in descriptor allocation for all other processes. Under
IDXD_OP_BLOCK, this can stall compress/decompress jobs in stress-test
scenarios (e.g. zswap_store() of 2M folios).

To make the iaa_crypto driver more fail-safe, this commit implements
the following:

1) Change compress/decompress descriptor allocations to be non-blocking
   with bounded retries ("timeouts"), as sketched below.
2) Return a compress error to zswap if descriptor allocation with
   timeouts fails during compress ops. zswap_store() will return an
   error and the folio gets stored in the backing swap device.
3) Fall back to software decompress if descriptor allocation with
   timeouts fails during decompress ops.
4) Fix bugs so that the descriptor is freed consistently in all error
   cases.

With these fixes, there are no task-blocked errors seen under
stress-testing conditions, and no performance degradation is observed.
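For illustration, here is a minimal user-space sketch of the bounded,
non-blocking retry pattern adopted in (1). try_alloc(), MAX_RETRIES and
the simulated "busy" counter are hypothetical stand-ins for
idxd_alloc_desc(wq, IDXD_OP_NONBLOCK), the IAA_ALLOC_DESC_*_TIMEOUT
bounds and real descriptor-pool pressure; none of them are kernel APIs:

/*
 * Hypothetical sketch: retry a non-blocking allocation up to a fixed
 * bound instead of blocking indefinitely.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_RETRIES 1000	/* stands in for IAA_ALLOC_DESC_COMP_TIMEOUT */

/* Simulated allocator: fails with EAGAIN until contention subsides. */
static void *try_alloc(int *busy)
{
	if (--(*busy) > 0) {
		errno = EAGAIN;
		return NULL;
	}
	return malloc(64);
}

int main(void)
{
	int busy = 5;		/* first four attempts fail with EAGAIN */
	void *desc = NULL;
	int retries = 0;

	while (retries++ < MAX_RETRIES) {
		desc = try_alloc(&busy);
		if (desc || errno != EAGAIN)
			break;	/* success, or a hard (non-transient) error */
		/* the kernel code calls cpu_relax() here */
	}

	if (!desc) {
		/* bounded failure: the caller returns an error (compress)
		 * or falls back to software decompress (decompress) */
		fprintf(stderr, "allocation timed out after %d tries\n", retries);
		return 1;
	}
	printf("allocated after %d attempt(s)\n", retries);
	free(desc);
	return 0;
}

Note that the driver uses a smaller bound for decompress
(IAA_ALLOC_DESC_DECOMP_TIMEOUT, 500) than for compress (1000), so the
decompress path gives up, and falls back to software, sooner.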
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |  3 +
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 74 ++++++++++++----------
 2 files changed, 45 insertions(+), 32 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 5f38f530c33d..de14e5e2a017 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -21,6 +21,9 @@
 
 #define IAA_COMPLETION_TIMEOUT		1000000
 
+#define IAA_ALLOC_DESC_COMP_TIMEOUT	1000
+#define IAA_ALLOC_DESC_DECOMP_TIMEOUT	500
+
 #define IAA_ANALYTICS_ERROR		0x0a
 #define IAA_ERROR_DECOMP_BUF_OVERFLOW	0x0b
 #define IAA_ERROR_COMP_BUF_OVERFLOW	0x19
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index cb96897e7fed..7503fafca279 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1406,6 +1406,7 @@ static int deflate_generic_decompress(struct acomp_req *req)
 	void *src, *dst;
 	int ret;
 
+	req->dlen = PAGE_SIZE;
 	src = kmap_local_page(sg_page(req->src)) + req->src->offset;
 	dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
 
@@ -1469,7 +1470,8 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 	struct iaa_device_compression_mode *active_compression_mode;
 	struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct iaa_device *iaa_device;
-	struct idxd_desc *idxd_desc;
+	struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN);
+	int alloc_desc_retries = 0;
 	struct iax_hw_desc *desc;
 	struct idxd_device *idxd;
 	struct iaa_wq *iaa_wq;
@@ -1485,7 +1487,11 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
 
-	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+	while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_DECOMP_TIMEOUT)) {
+		idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK);
+		cpu_relax();
+	}
+
 	if (IS_ERR(idxd_desc)) {
 		dev_dbg(dev, "idxd descriptor allocation failed\n");
 		dev_dbg(dev, "iaa compress failed: ret=%ld\n",
@@ -1661,7 +1667,8 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 	struct iaa_device_compression_mode *active_compression_mode;
 	struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct iaa_device *iaa_device;
-	struct idxd_desc *idxd_desc;
+	struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN);
+	int alloc_desc_retries = 0;
 	struct iax_hw_desc *desc;
 	struct idxd_device *idxd;
 	struct iaa_wq *iaa_wq;
@@ -1677,7 +1684,11 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
 
-	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+	while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_COMP_TIMEOUT)) {
+		idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK);
+		cpu_relax();
+	}
+
 	if (IS_ERR(idxd_desc)) {
 		dev_dbg(dev, "idxd descriptor allocation failed\n");
 		dev_dbg(dev, "iaa compress failed: ret=%ld\n", PTR_ERR(idxd_desc));
@@ -1753,15 +1764,10 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	*compression_crc = idxd_desc->iax_completion->crc;
 
-	if (!ctx->async_mode || disable_async)
-		idxd_free_desc(wq, idxd_desc);
-out:
-	return ret;
 err:
 	idxd_free_desc(wq, idxd_desc);
-	dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
-
-	goto out;
+out:
+	return ret;
 }
 
 static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
@@ -1773,7 +1779,8 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 	struct iaa_device_compression_mode *active_compression_mode;
 	struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct iaa_device *iaa_device;
-	struct idxd_desc *idxd_desc;
+	struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN);
+	int alloc_desc_retries = 0;
 	struct iax_hw_desc *desc;
 	struct idxd_device *idxd;
 	struct iaa_wq *iaa_wq;
@@ -1789,12 +1796,18 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
 
-	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+	while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_DECOMP_TIMEOUT)) {
+		idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK);
+		cpu_relax();
+	}
+
 	if (IS_ERR(idxd_desc)) {
 		dev_dbg(dev, "idxd descriptor allocation failed\n");
 		dev_dbg(dev, "iaa decompress failed: ret=%ld\n",
 			PTR_ERR(idxd_desc));
-		return PTR_ERR(idxd_desc);
+		ret = PTR_ERR(idxd_desc);
+		idxd_desc = NULL;
+		goto fallback_software_decomp;
 	}
 
 	desc = idxd_desc->iax_hw;
@@ -1837,7 +1850,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 	ret = idxd_submit_desc(wq, idxd_desc);
 	if (ret) {
 		dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
-		goto err;
+		goto fallback_software_decomp;
 	}
 
 	/* Update stats */
@@ -1851,19 +1864,20 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 	}
 
 	ret = check_completion(dev, idxd_desc->iax_completion, false, false);
+
+fallback_software_decomp:
 	if (ret) {
-		dev_dbg(dev, "%s: check_completion failed ret=%d\n", __func__, ret);
-		if (idxd_desc->iax_completion->status == IAA_ANALYTICS_ERROR) {
+		dev_dbg(dev, "%s: desc allocation/submission/check_completion failed ret=%d\n", __func__, ret);
+		if (idxd_desc && idxd_desc->iax_completion->status == IAA_ANALYTICS_ERROR) {
 			pr_warn("%s: falling back to deflate-generic decompress, "
 				"analytics error code %x\n", __func__,
 				idxd_desc->iax_completion->error_code);
-			ret = deflate_generic_decompress(req);
-			if (ret) {
-				dev_dbg(dev, "%s: deflate-generic failed ret=%d\n",
-					__func__, ret);
-				goto err;
-			}
-		} else {
+		}
+
+		ret = deflate_generic_decompress(req);
+
+		if (ret) {
+			pr_err("%s: iaa decompress failed: fallback to deflate-generic software decompress error ret=%d\n", __func__, ret);
 			goto err;
 		}
 	} else {
@@ -1872,19 +1886,15 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	*dlen = req->dlen;
 
-	if (!ctx->async_mode || disable_async)
-		idxd_free_desc(wq, idxd_desc);
-
 	/* Update stats */
 	update_total_decomp_bytes_in(slen);
 	update_wq_decomp_bytes(wq, slen);
+
+err:
+	if (idxd_desc)
+		idxd_free_desc(wq, idxd_desc);
 out:
 	return ret;
-err:
-	idxd_free_desc(wq, idxd_desc);
-	dev_dbg(dev, "iaa decompress failed: ret=%d\n", ret);
-
-	goto out;
 }
 
 static int iaa_comp_acompress(struct acomp_req *req)
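As an aside on fix (4) above: the reworked error handling follows the
classic single-exit cleanup idiom, freeing the descriptor in exactly
one place and only if it was actually allocated. A minimal,
self-contained sketch of that idiom; acquire(), release() and do_work()
are hypothetical stand-ins for idxd_alloc_desc(), idxd_free_desc() and
the submit/completion steps, not driver functions:

/*
 * Hypothetical sketch of the single-exit cleanup used in
 * iaa_decompress(): every failure funnels through one fallback path,
 * and the descriptor is freed once, only if it exists.
 */
#include <stdio.h>
#include <stdlib.h>

static void *acquire(int fail) { return fail ? NULL : malloc(32); }
static void release(void *d) { free(d); }
static int do_work(void *d) { (void)d; return -1; /* simulate failure */ }

static int decompress_like(int alloc_fails)
{
	void *desc = NULL;
	int ret = 0;

	desc = acquire(alloc_fails);
	if (!desc) {
		ret = -1;
		goto fallback;	/* no descriptor: try the software path */
	}

	ret = do_work(desc);

fallback:
	if (ret) {
		/* software fallback would run here; clear ret on success */
		printf("falling back to software decompress\n");
		ret = 0;
	}

	/* single exit: free the descriptor only if it was allocated */
	if (desc)
		release(desc);
	return ret;
}

int main(void)
{
	return decompress_like(1) || decompress_like(0);
}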
From patchwork Mon Mar 3 08:47:21 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869910
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
    linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
    ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 11/14] crypto: iaa - Fix for "deflate_generic_tfm" global being accessed without locks.
Date: Mon, 3 Mar 2025 00:47:21 -0800
Message-Id: <20250303084724.6490-12-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

The mainline implementation of deflate_generic_decompress() has a bug
in the usage of this global variable:

  static struct crypto_comp *deflate_generic_tfm;

The "deflate_generic_tfm" is allocated at module init time and freed
during module cleanup. Any call to software decompress (for instance,
when descriptor allocation or job submission fails) can trigger this
bug in the deflate_generic_decompress() procedure. The problem is the
unprotected access of "deflate_generic_tfm" in this procedure.

While stress-testing workloads under high memory pressure, with 1 IAA
device and "deflate-iaa" as the compressor, the descriptor allocation
times out and the software fallback route is taken. With multiple
processes calling:

  ret = crypto_comp_decompress(deflate_generic_tfm,
                               src, req->slen,
                               dst, &req->dlen);

we end up with data corruption that results in req->dlen being larger
than PAGE_SIZE. zswap_decompress() subsequently raises a kernel BUG().

This bug can manifest with high likelihood under high-contention,
high-memory-pressure situations. It is resolved by adding a mutex,
which is locked before accessing "deflate_generic_tfm" and unlocked
after the crypto_comp call is done.

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 7503fafca279..2a994f307679 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -105,6 +105,7 @@ static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];
 
 LIST_HEAD(iaa_devices);
 DEFINE_MUTEX(iaa_devices_lock);
+DEFINE_MUTEX(deflate_generic_tfm_lock);
 
 /* If enabled, IAA hw crypto algos are registered, unavailable otherwise */
 static bool iaa_crypto_enabled;
@@ -1407,6 +1408,9 @@ static int deflate_generic_decompress(struct acomp_req *req)
 	int ret;
 
 	req->dlen = PAGE_SIZE;
+
+	mutex_lock(&deflate_generic_tfm_lock);
+
 	src = kmap_local_page(sg_page(req->src)) + req->src->offset;
 	dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
 
@@ -1416,6 +1420,8 @@ static int deflate_generic_decompress(struct acomp_req *req)
 	kunmap_local(src);
 	kunmap_local(dst);
 
+	mutex_unlock(&deflate_generic_tfm_lock);
+
 	update_total_sw_decomp_calls();
 
 	return ret;
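For illustration, a minimal user-space analogue of this race and its
fix: every use of one shared, stateful handle is serialized behind a
single mutex. shared_tfm_decompress() and scratch are hypothetical
stand-ins for crypto_comp_decompress() and the internal state of the
global deflate_generic_tfm:

/*
 * Hypothetical sketch: without tfm_lock, concurrent callers interleave
 * reads/writes of the shared scratch state, which in the driver
 * manifested as req->dlen growing larger than PAGE_SIZE.
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tfm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the internal scratch state of the shared tfm. */
static unsigned long scratch;

/* Stand-in for crypto_comp_decompress() on the shared tfm: not
 * reentrant, because it works through shared scratch state. */
static unsigned long shared_tfm_decompress(unsigned long input)
{
	scratch = input * 2;	/* write shared state... */
	return scratch;		/* ...then read it back */
}

static void *worker(void *arg)
{
	unsigned long in = (unsigned long)arg, out;

	pthread_mutex_lock(&tfm_lock);
	out = shared_tfm_decompress(in);
	pthread_mutex_unlock(&tfm_lock);

	/* With the lock held, the result is always consistent. */
	assert(out == in * 2);
	return NULL;
}

int main(void)
{
	pthread_t threads[8];
	unsigned long i;

	for (i = 0; i < 8; i++)
		pthread_create(&threads[i], NULL, worker, (void *)i);
	for (i = 0; i < 8; i++)
		pthread_join(threads[i], NULL);
	printf("all decompress calls consistent\n");
	return 0;
}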
From patchwork Mon Mar 3 08:47:23 2025
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 869909
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
    linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
    ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 13/14] mm: zswap: Allocate pool batching resources if the compressor supports batching.
Date: Mon, 3 Mar 2025 00:47:23 -0800
Message-Id: <20250303084724.6490-14-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch adds support for the per-CPU acomp_ctx to track multiple
compression/decompression requests and multiple compression destination
buffers. The zswap_cpu_comp_prepare() CPU onlining code queries the
maximum batch size the compressor supports, and allocates the necessary
batching resources accordingly. However, zswap does not yet use more
than one request. Follow-up patches will actually utilize the multiple
acomp_ctx requests/buffers for batch compression/decompression of
multiple pages.

The newly added ZSWAP_MAX_BATCH_SIZE limits the amount of extra memory
used for batching. There is a small extra memory overhead of allocating
the "reqs" and "buffers" arrays for compressors that do not support
batching. A sketch of the resulting resource layout follows the diff.

Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 99 +++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 69 insertions(+), 30 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index cff96df1df8b..fae59d6d5147 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -78,6 +78,16 @@ static bool zswap_pool_reached_full;
 
 #define ZSWAP_PARAM_UNSET ""
 
+/*
+ * For compression batching of large folios:
+ * Maximum number of acomp compress requests that will be processed
+ * in a batch, iff the zswap compressor supports batching.
+ * This limit exists because we preallocate enough requests and buffers
+ * in the per-cpu acomp_ctx accordingly. Hence, a higher limit means higher
+ * memory usage.
+ */
+#define ZSWAP_MAX_BATCH_SIZE 8U
+
 static int zswap_setup(void);
 
 /* Enable/disable zswap */
@@ -143,8 +153,8 @@ bool zswap_never_enabled(void)
 
 struct crypto_acomp_ctx {
 	struct crypto_acomp *acomp;
-	struct acomp_req *req;
-	u8 *buffer;
+	struct acomp_req **reqs;
+	u8 **buffers;
 	u8 nr_reqs;
 	struct crypto_wait wait;
 	struct mutex mutex;
@@ -251,13 +261,22 @@ static void __zswap_pool_empty(struct percpu_ref *ref);
 static void acomp_ctx_dealloc(struct crypto_acomp_ctx *acomp_ctx)
 {
 	if (!IS_ERR_OR_NULL(acomp_ctx) && acomp_ctx->nr_reqs) {
+		u8 i;
+
+		if (acomp_ctx->reqs) {
+			for (i = 0; i < acomp_ctx->nr_reqs; ++i)
+				if (!IS_ERR_OR_NULL(acomp_ctx->reqs[i]))
+					acomp_request_free(acomp_ctx->reqs[i]);
+			kfree(acomp_ctx->reqs);
+			acomp_ctx->reqs = NULL;
+		}
 
-		if (!IS_ERR_OR_NULL(acomp_ctx->req))
-			acomp_request_free(acomp_ctx->req);
-		acomp_ctx->req = NULL;
-
-		kfree(acomp_ctx->buffer);
-		acomp_ctx->buffer = NULL;
+		if (acomp_ctx->buffers) {
+			for (i = 0; i < acomp_ctx->nr_reqs; ++i)
+				kfree(acomp_ctx->buffers[i]);
+			kfree(acomp_ctx->buffers);
+			acomp_ctx->buffers = NULL;
+		}
 
 		if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
 			crypto_free_acomp(acomp_ctx->acomp);
@@ -271,6 +290,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
 	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
 	int ret = -ENOMEM;
+	u8 i;
 
 	/*
 	 * Just to be even more fail-safe against changes in assumptions and/or
@@ -292,22 +312,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		goto fail;
 	}
 
-	acomp_ctx->nr_reqs = 1;
+	acomp_ctx->nr_reqs = min(ZSWAP_MAX_BATCH_SIZE,
+				 crypto_acomp_batch_size(acomp_ctx->acomp));
 
-	acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp);
-	if (!acomp_ctx->req) {
-		pr_err("could not alloc crypto acomp_request %s\n",
-		       pool->tfm_name);
-		ret = -ENOMEM;
+	acomp_ctx->reqs = kcalloc_node(acomp_ctx->nr_reqs, sizeof(struct acomp_req *),
+				       GFP_KERNEL, cpu_to_node(cpu));
+	if (!acomp_ctx->reqs)
 		goto fail;
+
+	for (i = 0; i < acomp_ctx->nr_reqs; ++i) {
+		acomp_ctx->reqs[i] = acomp_request_alloc(acomp_ctx->acomp);
+		if (!acomp_ctx->reqs[i]) {
+			pr_err("could not alloc crypto acomp_request reqs[%d] %s\n",
+			       i, pool->tfm_name);
+			goto fail;
+		}
 	}
 
-	acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
-	if (!acomp_ctx->buffer) {
-		ret = -ENOMEM;
+	acomp_ctx->buffers = kcalloc_node(acomp_ctx->nr_reqs, sizeof(u8 *),
+					  GFP_KERNEL, cpu_to_node(cpu));
+	if (!acomp_ctx->buffers)
 		goto fail;
+
+	for (i = 0; i < acomp_ctx->nr_reqs; ++i) {
+		acomp_ctx->buffers[i] = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL,
+						     cpu_to_node(cpu));
+		if (!acomp_ctx->buffers[i])
+			goto fail;
 	}
 
+	/*
+	 * The crypto_wait is used only in fully synchronous, i.e., with scomp
+	 * or non-poll mode of acomp, hence there is only one "wait" per
+	 * acomp_ctx, with callback set to reqs[0], under the assumption that
+	 * there is at least 1 request per acomp_ctx.
+	 */
 	crypto_init_wait(&acomp_ctx->wait);
 
 	/*
@@ -315,7 +354,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 	 * crypto_wait_req(); if the backend of acomp is scomp, the callback
 	 * won't be called, crypto_wait_req() will return without blocking.
 	 */
-	acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+	acomp_request_set_callback(acomp_ctx->reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &acomp_ctx->wait);
 
 	acomp_ctx->is_sleepable = acomp_is_async(acomp_ctx->acomp);
@@ -407,8 +446,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 		struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
 
 		acomp_ctx->acomp = NULL;
-		acomp_ctx->req = NULL;
-		acomp_ctx->buffer = NULL;
+		acomp_ctx->reqs = NULL;
+		acomp_ctx->buffers = NULL;
 		acomp_ctx->__online = false;
 		acomp_ctx->nr_reqs = 0;
 		mutex_init(&acomp_ctx->mutex);
@@ -1026,7 +1065,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	u8 *dst;
 
 	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
-	dst = acomp_ctx->buffer;
+	dst = acomp_ctx->buffers[0];
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
 
@@ -1036,7 +1075,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	 * giving the dst buffer with enough length to avoid buffer overflow.
 	 */
 	sg_init_one(&output, dst, PAGE_SIZE * 2);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
+	acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, PAGE_SIZE, dlen);
 
 	/*
 	 * it maybe looks a little bit silly that we send an asynchronous request,
@@ -1050,8 +1089,8 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	 * but in different threads running on different cpu, we have different
 	 * acomp instance, so multiple threads can do (de)compression in parallel.
 	 */
-	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
-	dlen = acomp_ctx->req->dlen;
+	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait);
+	dlen = acomp_ctx->reqs[0]->dlen;
 	if (comp_ret)
 		goto unlock;
 
@@ -1102,19 +1141,19 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	 */
 	if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) ||
 	    !virt_addr_valid(src)) {
-		memcpy(acomp_ctx->buffer, src, entry->length);
-		src = acomp_ctx->buffer;
+		memcpy(acomp_ctx->buffers[0], src, entry->length);
+		src = acomp_ctx->buffers[0];
 		zpool_unmap_handle(zpool, entry->handle);
 	}
 
 	sg_init_one(&input, src, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_folio(&output, folio, PAGE_SIZE, 0);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
-	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
-	BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
+	acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, entry->length, PAGE_SIZE);
+	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->reqs[0]), &acomp_ctx->wait));
+	BUG_ON(acomp_ctx->reqs[0]->dlen != PAGE_SIZE);
 
-	if (src != acomp_ctx->buffer)
+	if (src != acomp_ctx->buffers[0])
 		zpool_unmap_handle(zpool, entry->handle);
 	acomp_ctx_put_unlock(acomp_ctx);
 }
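To summarize the resource layout this patch introduces, here is a
hedged user-space sketch: per-context arrays of requests and
destination buffers, sized to min(ZSWAP_MAX_BATCH_SIZE, compressor
batch size), with cleanup that tolerates partial allocation.
ctx_alloc(), ctx_free(), MAX_BATCH and the malloc() calls are
illustrative stand-ins only; the real code uses kcalloc_node(),
acomp_request_alloc() and crypto_acomp_batch_size() from the
CPU-hotplug prepare callback:

/*
 * Hypothetical sketch of the batching resource layout: nr_reqs
 * request/buffer pairs, clamped to MAX_BATCH, freed through one
 * cleanup routine that handles partially allocated state.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_BATCH 8u		/* analogous to ZSWAP_MAX_BATCH_SIZE */
#define BUF_SIZE  (4096 * 2)	/* two pages, as in zswap's dst buffers */

struct ctx {
	unsigned nr_reqs;
	void **reqs;		/* stands in for struct acomp_req *[] */
	char **buffers;
};

static void ctx_free(struct ctx *c)
{
	unsigned i;

	/* free(NULL) is a no-op, so partial allocation is safe here */
	for (i = 0; c->reqs && i < c->nr_reqs; i++)
		free(c->reqs[i]);
	for (i = 0; c->buffers && i < c->nr_reqs; i++)
		free(c->buffers[i]);
	free(c->reqs);
	free(c->buffers);
}

static int ctx_alloc(struct ctx *c, unsigned compressor_batch)
{
	unsigned i;

	/* clamp the compressor's batch size to the global limit */
	c->nr_reqs = compressor_batch < MAX_BATCH ? compressor_batch : MAX_BATCH;
	c->reqs = calloc(c->nr_reqs, sizeof(*c->reqs));
	c->buffers = calloc(c->nr_reqs, sizeof(*c->buffers));
	if (!c->reqs || !c->buffers)
		goto fail;

	for (i = 0; i < c->nr_reqs; i++) {
		c->reqs[i] = malloc(64);
		c->buffers[i] = malloc(BUF_SIZE);
		if (!c->reqs[i] || !c->buffers[i])
			goto fail;
	}
	return 0;
fail:
	ctx_free(c);
	return -1;
}

int main(void)
{
	struct ctx c = { 0 };

	if (ctx_alloc(&c, 16))	/* a 16-wide compressor is clamped to 8 */
		return 1;
	printf("allocated %u request/buffer pairs\n", c.nr_reqs);
	ctx_free(&c);
	return 0;
}

For a compressor that does not batch, the same path presumably yields
nr_reqs == 1, so the only extra cost over the old single req/buffer
fields is the two small array allocations the commit message mentions.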