From patchwork Mon Jul 12 06:08:51 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 475687
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vadim Fedorenko,
 Seth Forshee, Jakub Kicinski, "David S. Miller", Sasha Levin
Subject: [PATCH 5.12 448/700] tls: prevent oversized sendfile() hangs by ignoring MSG_MORE
Date: Mon, 12 Jul 2021 08:08:51 +0200
Message-Id: <20210712061024.193653185@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210712060924.797321836@linuxfoundation.org>
References: <20210712060924.797321836@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Jakub Kicinski

[ Upstream commit d452d48b9f8b1a7f8152d33ef52cfd7fe1735b0a ]

We got multiple reports that the multi_chunk_sendfile test case from
the tls selftest fails. This was sort of expected, as the original fix
was never applied (see the first Link: below).

The test in question uses sendfile() with a count larger than the size
of the underlying file. This makes splice set MSG_MORE on all sendpage
calls, so TLS never closes and flushes the last partial record.

Eric seems to have addressed a similar problem in commit 35f9c09fe9c7
("tcp: tcp_sendpages() should call tcp_push() once") by introducing
MSG_SENDPAGE_NOTLAST. Unlike MSG_MORE, MSG_SENDPAGE_NOTLAST is not set
on the last call for a "pipefull" of data (PIPE_DEF_BUFFERS == 16, so
every 16 pages, or whenever we run out of data).

Having a break every 16 pages should be fine: TLS can pack exactly 4
pages into a record, so aligned reads should see no difference, while
unaligned reads may see one extra record per sendpage().

Sticking to TCP semantics seems preferable to modifying splice, but we
can revisit it if real-life scenarios show a regression.
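For illustration, the failing pattern boils down to the user-space
sketch below. It assumes "sock" is an already established kTLS TX
socket (the TCP_ULP "tls" / TLS_TX crypto_info setup is elided), and
the helper name and error handling are ours, not the selftest's:

	/* Sketch of the failing pattern, assuming "sock" is a kTLS
	 * TX socket; kTLS setup elided.
	 */
	#include <fcntl.h>
	#include <sys/sendfile.h>
	#include <sys/stat.h>
	#include <unistd.h>

	static ssize_t oversized_sendfile(int sock, const char *path)
	{
		struct stat st;
		ssize_t ret;
		int fd;

		fd = open(path, O_RDONLY);
		if (fd < 0)
			return -1;
		if (fstat(fd, &st) < 0) {
			close(fd);
			return -1;
		}

		/* Count exceeds the file size, so splice keeps
		 * MSG_MORE set on every sendpage call; before this
		 * fix TLS never closed the last partial record and
		 * the call could hang.
		 */
		ret = sendfile(sock, fd, NULL, st.st_size + 1);

		close(fd);
		return ret;
	}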
Reported-by: Vadim Fedorenko
Reported-by: Seth Forshee
Link: https://lore.kernel.org/netdev/1591392508-14592-1-git-send-email-pooja.trivedi@stackpath.com/
Fixes: 3c4d7559159b ("tls: kernel TLS support")
Signed-off-by: Jakub Kicinski
Tested-by: Seth Forshee
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 net/tls/tls_sw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 6086cf4f10a7..60d2ff13fa9e 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1153,7 +1153,7 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
 	int ret = 0;
 	bool eor;
 
-	eor = !(flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST));
+	eor = !(flags & MSG_SENDPAGE_NOTLAST);
 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
 
 	/* Call the sk_stream functions to manage the sndbuf mem. */
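For reference, a hedged sketch of the splice-side logic the commit
message describes, paraphrased from fs/splice.c's pipe_to_sendpage()
around v5.12 (not quoted verbatim; the comments and trimming are
editorial): MSG_MORE follows SPLICE_F_MORE for the whole oversized
sendfile(), while MSG_SENDPAGE_NOTLAST is left clear on the last
buffer of each pipe-full:

	/* Flag selection when splicing a pipe buffer to a socket. */
	more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;

	/* Not the last buffer of this pipe-full: keep NOTLAST set so
	 * TLS defers the end-of-record. On the final buffer of every
	 * PIPE_DEF_BUFFERS (16-page) batch it stays clear, which is
	 * the periodic flush point the patch relies on.
	 */
	if (sd->len < sd->total_len &&
	    pipe_occupancy(pipe->head, pipe->tail) > 1)
		more |= MSG_SENDPAGE_NOTLAST;

With the one-line change above, tls_sw_do_sendpage() derives eor from
MSG_SENDPAGE_NOTLAST alone, matching the TCP behaviour introduced in
35f9c09fe9c7: an oversized sendfile() now flushes a record at least
once per pipe-full instead of waiting forever for MSG_MORE to clear.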