From patchwork Wed Oct 19 08:37:04 2022
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 616506
From: Nicolai Stange
To: Steffen Klassert, Daniel Jordan
Cc: Herbert Xu, Martin Doucha, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nicolai Stange
Subject: [PATCH 1/5] padata: introduce internal padata_get/put_pd() helpers
Date: Wed, 19 Oct 2022 10:37:04 +0200
Message-Id: <20221019083708.27138-2-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
X-Mailing-List: linux-crypto@vger.kernel.org

The next commit in this series will add yet another code site that
decrements a struct parallel_data's refcount and invokes deallocation as
appropriate. With that, it is time to provide proper helper functions for
managing parallel_data refcounts and to convert the existing open-coded
refcount manipulation sites.

Implement padata_put_pd() as well as padata_put_pd_many() for the batched
releases needed in padata_serial_worker(). For symmetry, also provide
padata_get_pd(), even though its implementation is fairly trivial.
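The get/put pattern those helpers encapsulate can be sketched in plain
user-space C. This is an illustrative mockup, not kernel code: the names
(pd_mock_*) are hypothetical, and C11 atomics stand in for the kernel's
refcount_t.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* User-space stand-in for struct parallel_data: just the refcount
 * plus a flag so a caller can observe the deallocation. */
struct pd_mock {
	atomic_int refcnt;
	bool *freed;
};

static void pd_mock_free(struct pd_mock *pd)
{
	*pd->freed = true;
	free(pd);
}

/* Mirrors padata_get_pd(): take one additional reference. */
static void pd_mock_get(struct pd_mock *pd)
{
	atomic_fetch_add(&pd->refcnt, 1);
}

/* Mirrors padata_put_pd_many(): drop cnt references in one go and
 * free the object once the count hits zero, like the batched release
 * at the end of padata_serial_worker(). */
static void pd_mock_put_many(struct pd_mock *pd, int cnt)
{
	/* atomic_fetch_sub() returns the previous value. */
	if (atomic_fetch_sub(&pd->refcnt, cnt) == cnt)
		pd_mock_free(pd);
}

/* Mirrors padata_put_pd(): the single-reference special case. */
static void pd_mock_put(struct pd_mock *pd)
{
	pd_mock_put_many(pd, 1);
}

static struct pd_mock *pd_mock_new(bool *freed)
{
	struct pd_mock *pd = malloc(sizeof(*pd));

	atomic_init(&pd->refcnt, 1);
	pd->freed = freed;
	*freed = false;
	return pd;
}
```

The point of padata_put_pd_many() is that the serial worker can count
completed objects locally and release them in a single atomic operation
rather than one put per object.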
Convert the existing open-coded parallel_data ->refcnt manipulation sites
to these new helpers.

Signed-off-by: Nicolai Stange
Acked-by: Daniel Jordan
---
 kernel/padata.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index e5819bb8bd1d..3bd1e23f089b 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -45,6 +45,10 @@ struct padata_mt_job_state {
 };
 
 static void padata_free_pd(struct parallel_data *pd);
+static inline void padata_get_pd(struct parallel_data *pd);
+static void padata_put_pd_many(struct parallel_data *pd, int cnt);
+static inline void padata_put_pd(struct parallel_data *pd);
+
 static void __init padata_mt_helper(struct work_struct *work);
 
 static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index)
@@ -198,7 +202,7 @@ int padata_do_parallel(struct padata_shell *ps,
 	if ((pinst->flags & PADATA_RESET))
 		goto out;
 
-	refcount_inc(&pd->refcnt);
+	padata_get_pd(pd);
 	padata->pd = pd;
 	padata->cb_cpu = *cb_cpu;
 
@@ -370,8 +374,7 @@ static void padata_serial_worker(struct work_struct *serial_work)
 	}
 	local_bh_enable();
 
-	if (refcount_sub_and_test(cnt, &pd->refcnt))
-		padata_free_pd(pd);
+	padata_put_pd_many(pd, cnt);
 }
 
 /**
@@ -608,6 +611,22 @@ static void padata_free_pd(struct parallel_data *pd)
 	kfree(pd);
 }
 
+static inline void padata_get_pd(struct parallel_data *pd)
+{
+	refcount_inc(&pd->refcnt);
+}
+
+static void padata_put_pd_many(struct parallel_data *pd, int cnt)
+{
+	if (refcount_sub_and_test(cnt, &pd->refcnt))
+		padata_free_pd(pd);
+}
+
+static inline void padata_put_pd(struct parallel_data *pd)
+{
+	padata_put_pd_many(pd, 1);
+}
+
 static void __padata_start(struct padata_instance *pinst)
 {
 	pinst->flags |= PADATA_INIT;
@@ -654,8 +673,7 @@ static int padata_replace(struct padata_instance *pinst)
 	synchronize_rcu();
 
 	list_for_each_entry_continue_reverse(ps, &pinst->pslist, list)
-		if (refcount_dec_and_test(&ps->opd->refcnt))
-			padata_free_pd(ps->opd);
+		padata_put_pd(ps->opd);
 
 	pinst->flags &= ~PADATA_RESET;

From patchwork Wed Oct 19 08:37:05 2022
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 616872
From: Nicolai Stange
To: Steffen Klassert, Daniel Jordan
Cc: Herbert Xu, Martin Doucha, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nicolai Stange
Subject: [PATCH 2/5] padata: make padata_free_shell() to respect pd's ->refcnt
Date: Wed, 19 Oct 2022 10:37:05 +0200
Message-Id: <20221019083708.27138-3-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
X-Mailing-List: linux-crypto@vger.kernel.org

On a PREEMPT kernel, the following has been observed while running
pcrypt_aead01 from LTP:

[ ] general protection fault: 0000 [#1] PREEMPT_RT SMP PTI
<...>
[ ] Workqueue: pdecrypt_parallel padata_parallel_worker
[ ] RIP: 0010:padata_reorder+0x19/0x120
<...>
[ ] Call Trace:
[ ]  padata_parallel_worker+0xa3/0xf0
[ ]  process_one_work+0x1db/0x4a0
[ ]  worker_thread+0x2d/0x3c0
[ ]  ? process_one_work+0x4a0/0x4a0
[ ]  kthread+0x159/0x180
[ ]  ? kthread_park+0xb0/0xb0
[ ]  ret_from_fork+0x35/0x40

The pcrypt_aead01 testcase basically runs a NEWALG/DELALG sequence for
some fixed pcrypt instance in a loop, back to back.

The problem is that once the last ->serial() in padata_serial_worker()
gets invoked, the pcrypt requests from the selftests signal completion,
and pcrypt_aead01 can move on and subsequently issue a DELALG. Upon pcrypt
instance deregistration, the associated padata_shell gets destroyed, which
in turn unconditionally frees the associated parallel_data instance. If
padata_serial_worker() now resumes operation after e.g. having previously
been preempted upon the return from the last of those ->serial()
callbacks, its subsequent accesses to pd for managing the ->refcnt are all
use-after-frees. In particular, if the memory backing pd has meanwhile
been reused for some new parallel_data allocation, e.g. in the course of
processing another subsequent NEWALG request, padata_serial_worker() might
find an initial ->refcnt of one and free pd from under that NEWALG or the
associated selftests respectively, leading to "secondary" UAFs such as in
the Oops above.

Note that as it currently stands, a padata_shell already owns a reference
on its associated parallel_data. So fix the UAF in padata_serial_worker()
by making padata_free_shell() properly drop that reference via
padata_put_pd() instead of unconditionally freeing the shell's associated
parallel_data.
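The ownership rule this fix relies on can be demonstrated with a small
self-contained user-space mockup (hypothetical pd_sim_* names, C11
atomics in place of refcount_t): when the shell merely drops its own
reference instead of freeing outright, a serial worker that is still
running keeps the object alive.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct pd_sim {
	atomic_int refcnt;
	bool *freed;
};

static void pd_sim_release(struct pd_sim *pd, int cnt)
{
	/* Free only once the last reference is gone. */
	if (atomic_fetch_sub(&pd->refcnt, cnt) == cnt) {
		*pd->freed = true;
		free(pd);
	}
}

/* The buggy padata_free_shell() freed pd unconditionally; the fixed
 * one drops only the shell's own reference, as modeled here. */
static void shell_free_fixed(struct pd_sim *pd)
{
	pd_sim_release(pd, 1);
}

/* The serial worker's final batched release of the references held
 * by the requests it completed. */
static void serial_worker_done(struct pd_sim *pd, int cnt)
{
	pd_sim_release(pd, cnt);
}

static struct pd_sim *pd_sim_new(int refs, bool *freed)
{
	struct pd_sim *pd = malloc(sizeof(*pd));

	atomic_init(&pd->refcnt, refs);
	pd->freed = freed;
	*freed = false;
	return pd;
}
```

With the buggy variant, the free would happen while the worker still
holds references, which is exactly the preemption window the Oops above
fell into.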
Fixes: 07928d9bfc81 ("padata: Remove broken queue flushing")
Signed-off-by: Nicolai Stange
Acked-by: Daniel Jordan
---
 kernel/padata.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index 3bd1e23f089b..0bf8c80dad5a 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -1112,7 +1112,7 @@ void padata_free_shell(struct padata_shell *ps)
 	mutex_lock(&ps->pinst->lock);
 	list_del(&ps->list);
-	padata_free_pd(rcu_dereference_protected(ps->pd, 1));
+	padata_put_pd(rcu_dereference_protected(ps->pd, 1));
 	mutex_unlock(&ps->pinst->lock);
 
 	kfree(ps);

From patchwork Wed Oct 19 08:37:06 2022
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 616505
From: Nicolai Stange
To: Steffen Klassert, Daniel Jordan
Cc: Herbert Xu, Martin Doucha, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nicolai Stange
Subject: [PATCH 3/5] padata: grab parallel_data refcnt for reorder
Date: Wed, 19 Oct 2022 10:37:06 +0200
Message-Id: <20221019083708.27138-4-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
X-Mailing-List: linux-crypto@vger.kernel.org

On entry of padata_do_serial(), the in-flight padata_priv owns a
reference to the associated parallel_data instance. However, as soon as
the padata_priv has been enqueued on the reorder list, it can be
completed from a different context, causing the reference to get released
in the course. This would potentially cause UAFs from the subsequent
padata_reorder() operations invoked from the enqueueing
padata_do_serial() or from the reorder work.

Note that this is a purely theoretical concern; the problem has never
actually been observed -- it would require multiple pcrypt request
submissions racing against each other, ultimately a pcrypt instance
destruction (DELALG) shortly after request completions, as well as
unfortunate timing. However, for the sake of correctness, it is still
worth fixing.

Make padata_do_serial() grab a reference count on the parallel_data for
the subsequent reorder operation(s). As long as the padata_priv has not
been enqueued, this is safe, because, as mentioned above, in-flight
padata_privs own a reference already.

Note that padata_reorder() might schedule another padata_reorder() work,
and thus care must be taken not to prematurely release that "reorder
refcount" from padata_do_serial() in case that has happened. Make
padata_reorder() return a bool indicating whether or not a reorder work
has been scheduled. Let padata_do_serial() drop its refcount only if this
is not the case. Accordingly, make the reorder work handler,
invoke_padata_reorder(), drop it then as appropriate.

A remark on the commit chosen for the Fixes tag reference below: before
commit bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using
per-instance padata queues"), the parallel_data lifetime had been tied to
the padata_instance. padata_free() resp. padata_stop() issued a
synchronize_rcu() before padata_free_pd() from the instance destruction
path, rendering UAFs from the padata_do_serial()=>padata_reorder()
invocations with BHs disabled impossible AFAICS.
With that, the padata_reorder() work remains to be considered. Before
commit b128a3040935 ("padata: allocate workqueue internally"), the
workqueue got destroyed (from pcrypt), hence drained, before the padata
instance destruction, but this change moved that to after the
padata_free_pd() invocation from __padata_free().

So, while the Fixes reference below is most likely technically correct, I
would still like to reiterate that this problem is probably hard to
trigger in practice, even more so before commit bbefa1dd6a6d ("crypto:
pcrypt - Avoid deadlock by using per-instance padata queues").

Fixes: b128a3040935 ("padata: allocate workqueue internally")
Signed-off-by: Nicolai Stange
---
 kernel/padata.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index 0bf8c80dad5a..b79226727ef7 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -275,7 +275,7 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
 	return padata;
 }
 
-static void padata_reorder(struct parallel_data *pd)
+static bool padata_reorder(struct parallel_data *pd)
 {
 	struct padata_instance *pinst = pd->ps->pinst;
 	int cb_cpu;
@@ -294,7 +294,7 @@ static void padata_reorder(struct parallel_data *pd)
 	 * care for all the objects enqueued during the holdtime of the lock.
 	 */
 	if (!spin_trylock_bh(&pd->lock))
-		return;
+		return false;
 
 	while (1) {
 		padata = padata_find_next(pd, true);
@@ -331,17 +331,23 @@ static void padata_reorder(struct parallel_data *pd)
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
 	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
-		queue_work(pinst->serial_wq, &pd->reorder_work);
+		return queue_work(pinst->serial_wq, &pd->reorder_work);
+
+	return false;
 }
 
 static void invoke_padata_reorder(struct work_struct *work)
 {
 	struct parallel_data *pd;
+	bool keep_refcnt;
 
 	local_bh_disable();
 	pd = container_of(work, struct parallel_data, reorder_work);
-	padata_reorder(pd);
+	keep_refcnt = padata_reorder(pd);
 	local_bh_enable();
+
+	if (!keep_refcnt)
+		padata_put_pd(pd);
 }
 
 static void padata_serial_worker(struct work_struct *serial_work)
@@ -392,6 +398,15 @@ void padata_do_serial(struct padata_priv *padata)
 	struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
 	struct padata_priv *cur;
 
+	/*
+	 * The in-flight padata owns a reference on pd. However, as
+	 * soon as it's been enqueued on the reorder list, another
+	 * task can dequeue and complete it, thereby dropping the
+	 * reference. Grab another reference here, it will eventually
+	 * be released from a reorder work, if any, or below.
+	 */
+	padata_get_pd(pd);
+
 	spin_lock(&reorder->lock);
 	/* Sort in ascending order of sequence number. */
 	list_for_each_entry_reverse(cur, &reorder->list, list)
@@ -407,7 +422,8 @@ void padata_do_serial(struct padata_priv *padata)
 	 */
 	smp_mb();
 
-	padata_reorder(pd);
+	if (!padata_reorder(pd))
+		padata_put_pd(pd);
 }
 EXPORT_SYMBOL(padata_do_serial);

From patchwork Wed Oct 19 08:37:07 2022
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 616871
From: Nicolai Stange
To: Steffen Klassert, Daniel Jordan
Cc: Herbert Xu, Martin Doucha, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nicolai Stange
Subject: [PATCH 4/5] padata: split out dequeue operation from padata_find_next()
Date: Wed, 19 Oct 2022 10:37:07 +0200
Message-Id: <20221019083708.27138-5-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
X-Mailing-List: linux-crypto@vger.kernel.org

Currently, padata_find_next() takes a 'remove_object' argument for
specifying whether the caller wants the returned padata_priv, if any, to
get removed from the percpu reorder list it's been found on. There are
only two callsites, both from padata_reorder():
- one supposed to dequeue the padata_priv instances to be processed in a
  loop, i.e.
  it has 'remove_object' set to true, and
- another one near the end of padata_reorder() with 'remove_object' set
  to false for checking whether the reorder work needs to get
  rescheduled.

In order to deal with lifetime issues, a future commit will need to move
this latter reorder work scheduling operation to under the reorder->lock,
where pd->ps is guaranteed to exist as long as there are any padata_privs
to process. However, this lock is currently taken within
padata_find_next(). In order to be able to extend the reorder->lock to
beyond the call to padata_find_next() from padata_reorder(), provide a
variant for which the caller grabs the lock on behalf of the callee.

Split padata_find_next() into two parts:
- __padata_find_next(), which expects the caller to hold the
  reorder->lock and only returns the found padata_priv, if any, without
  removing it from the queue.
- padata_dequeue_next(), with functionality equivalent to the former
  padata_find_next(pd, remove_object=true) and implemented by means of
  the factored out __padata_find_next().

Adapt the two callsites in padata_reorder() as appropriate.

There is no change in functionality.

Signed-off-by: Nicolai Stange
---
 kernel/padata.c | 57 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index b79226727ef7..e9eab3e94cfc 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -230,29 +230,24 @@ int padata_do_parallel(struct padata_shell *ps,
 EXPORT_SYMBOL(padata_do_parallel);
 
 /*
- * padata_find_next - Find the next object that needs serialization.
+ * __padata_find_next - Find the next object that needs serialization.
  *
  * Return:
  * * A pointer to the control struct of the next object that needs
- *   serialization, if present in one of the percpu reorder queues.
+ *   serialization, if already present on the given percpu reorder queue.
  * * NULL, if the next object that needs serialization will
  *   be parallel processed by another cpu and is not yet present in
- *   the cpu's reorder queue.
+ *   the reorder queue.
  */
-static struct padata_priv *padata_find_next(struct parallel_data *pd,
-					    bool remove_object)
+static struct padata_priv *__padata_find_next(struct parallel_data *pd,
+					      struct padata_list *reorder)
 {
 	struct padata_priv *padata;
-	struct padata_list *reorder;
-	int cpu = pd->cpu;
 
-	reorder = per_cpu_ptr(pd->reorder_list, cpu);
+	lockdep_assert_held(&reorder->lock);
 
-	spin_lock(&reorder->lock);
-	if (list_empty(&reorder->list)) {
-		spin_unlock(&reorder->lock);
+	if (list_empty(&reorder->list))
 		return NULL;
-	}
 
 	padata = list_entry(reorder->list.next, struct padata_priv, list);
 
@@ -260,16 +255,30 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
 	 * Checks the rare case where two or more parallel jobs have hashed to
 	 * the same CPU and one of the later ones finishes first.
 	 */
-	if (padata->seq_nr != pd->processed) {
+	if (padata->seq_nr != pd->processed)
+		return NULL;
+
+	return padata;
+}
+
+static struct padata_priv *padata_dequeue_next(struct parallel_data *pd)
+{
+	struct padata_priv *padata;
+	struct padata_list *reorder;
+	int cpu = pd->cpu;
+
+	reorder = per_cpu_ptr(pd->reorder_list, cpu);
+	spin_lock(&reorder->lock);
+
+	padata = __padata_find_next(pd, reorder);
+	if (!padata) {
 		spin_unlock(&reorder->lock);
 		return NULL;
 	}
 
-	if (remove_object) {
-		list_del_init(&padata->list);
-		++pd->processed;
-		pd->cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu, -1, false);
-	}
+	list_del_init(&padata->list);
+	++pd->processed;
+	pd->cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu, -1, false);
 
 	spin_unlock(&reorder->lock);
 	return padata;
@@ -297,7 +306,7 @@ static bool padata_reorder(struct parallel_data *pd)
 		return false;
 
 	while (1) {
-		padata = padata_find_next(pd, true);
+		padata = padata_dequeue_next(pd);
 
 		/*
 		 * If the next object that needs serialization is parallel
@@ -330,8 +339,16 @@ static bool padata_reorder(struct parallel_data *pd)
 	smp_mb();
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
-	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
+	if (!list_empty(&reorder->list)) {
+		spin_lock(&reorder->lock);
+		if (!__padata_find_next(pd, reorder)) {
+			spin_unlock(&reorder->lock);
+			return false;
+		}
+		spin_unlock(&reorder->lock);
+
 		return queue_work(pinst->serial_wq, &pd->reorder_work);
+	}
 
 	return false;
 }

From patchwork Wed Oct 19 08:37:08 2022
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 616504
From: Nicolai Stange
To: Steffen Klassert, Daniel Jordan
Cc: Herbert Xu, Martin Doucha, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nicolai Stange
Subject: [PATCH 5/5] padata: avoid potential UAFs to the padata_shell from padata_reorder()
Date: Wed, 19 Oct 2022 10:37:08 +0200
Message-Id: <20221019083708.27138-6-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
X-Mailing-List: linux-crypto@vger.kernel.org

Even though the parallel_data "pd" instance passed to padata_reorder() is
guaranteed to exist as per the reference held by its
callers, the same is not true for the associated padata_shell, pd->ps.
More specifically, once the last padata_priv request has been completed,
either at entry from padata_reorder() or concurrently to it, the padata
API users are well within their right to free the padata_shell instance.

Note that this is a purely theoretical issue; it has not actually been
observed. Yet it's worth fixing for the sake of robustness.

Exploit the fact that as long as there are any not yet completed
padata_privs around on any of the percpu reorder queues, pd->ps is
guaranteed to exist. Make padata_reorder() load from pd->ps only when
it's known that there is at least one in-flight padata_priv object to
reorder. Note that this involves moving pd->ps accesses to under the
reorder->lock as appropriate, so that the found padata_priv object won't
get dequeued and completed concurrently from a different context.

Fixes: bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using
per-instance padata queues")
Signed-off-by: Nicolai Stange
---
 kernel/padata.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index e9eab3e94cfc..fa4818b81eca 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -286,7 +286,6 @@ static struct padata_priv *padata_dequeue_next(struct parallel_data *pd)
 
 static bool padata_reorder(struct parallel_data *pd)
 {
-	struct padata_instance *pinst = pd->ps->pinst;
 	int cb_cpu;
 	struct padata_priv *padata;
 	struct padata_serial_queue *squeue;
@@ -323,7 +322,11 @@ static bool padata_reorder(struct parallel_data *pd)
 		list_add_tail(&padata->list, &squeue->serial.list);
 		spin_unlock(&squeue->serial.lock);
 
-		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+		/*
+		 * Note: as long as there are requests in-flight,
+		 * pd->ps is guaranteed to exist.
+		 */
+		queue_work_on(cb_cpu, pd->ps->pinst->serial_wq, &squeue->work);
 	}
 
 	spin_unlock_bh(&pd->lock);
@@ -340,14 +343,23 @@ static bool padata_reorder(struct parallel_data *pd)
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
 	if (!list_empty(&reorder->list)) {
+		bool reenqueued;
+
 		spin_lock(&reorder->lock);
 		if (!__padata_find_next(pd, reorder)) {
 			spin_unlock(&reorder->lock);
 			return false;
 		}
+
+		/*
+		 * Note: as long as there are requests in-flight,
+		 * pd->ps is guaranteed to exist.
+		 */
+		reenqueued = queue_work(pd->ps->pinst->serial_wq,
+					&pd->reorder_work);
 		spin_unlock(&reorder->lock);
-		return queue_work(pinst->serial_wq, &pd->reorder_work);
+		return reenqueued;
 	}
 
 	return false;
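The reference-handoff protocol that patch 3 introduced and this series
keeps relying on can be sketched as a user-space mockup (hypothetical
pd_ref_* names, C11 atomics in place of refcount_t): the serializing path
takes an extra reference up front and drops it again only if no reorder
work was scheduled to inherit it.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct pd_ref {
	atomic_int refcnt;
	bool *freed;
};

static void pd_ref_get(struct pd_ref *pd)
{
	atomic_fetch_add(&pd->refcnt, 1);
}

static void pd_ref_put(struct pd_ref *pd)
{
	if (atomic_fetch_sub(&pd->refcnt, 1) == 1) {
		*pd->freed = true;
		free(pd);
	}
}

/* Stand-in for padata_reorder(): returns true iff a follow-up reorder
 * work item was scheduled, in which case that work inherits the
 * caller's extra reference. */
static bool reorder_sim(struct pd_ref *pd, bool schedules_work)
{
	(void)pd; /* the real function would walk pd's reorder queues */
	return schedules_work;
}

/* Models the fixed padata_do_serial(): take a reference up front and
 * drop it again only if no reorder work took it over. */
static void do_serial_sim(struct pd_ref *pd, bool schedules_work)
{
	pd_ref_get(pd);
	if (!reorder_sim(pd, schedules_work))
		pd_ref_put(pd);
}

static struct pd_ref *pd_ref_new(bool *freed)
{
	struct pd_ref *pd = malloc(sizeof(*pd));

	atomic_init(&pd->refcnt, 1);
	pd->freed = freed;
	*freed = false;
	return pd;
}
```

Either way the count balances: the path that schedules the work leaves
one extra reference for invoke_padata_reorder() to put, the path that
does not drops it immediately.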