From patchwork Mon Aug 3 12:19:01 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 266916
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, guodeqing, Robin Murphy,
 Will Deacon, Sasha Levin
Subject: [PATCH 5.7 083/120] arm64: csum: Fix handling of bad packets
Date: Mon, 3 Aug 2020 14:19:01 +0200
Message-Id: <20200803121906.911884315@linuxfoundation.org>
In-Reply-To: <20200803121902.860751811@linuxfoundation.org>
References: <20200803121902.860751811@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Robin Murphy

[ Upstream commit 05fb3dbda187bbd9cc1cd0e97e5d6595af570ac6 ]

Although iph is expected to point to at least 20 bytes of valid memory,
ihl may be bogus, for example on reception of a corrupt packet. If it
happens to be less than 5, we really don't want to run away and
dereference 16GB worth of memory until it wraps back to exactly zero...
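
The failure mode is easiest to see with the arithmetic spelled out. The
following is a hedged, user-space sketch (not part of the patch; names and
values are purely illustrative) showing why an unsigned header-length counter
runs away when ihl < 5, while a signed counter with a "> 0" test stays within
the 20 bytes the caller guarantees:

    #include <stdio.h>

    int main(void)
    {
        unsigned int ihl = 2;              /* bogus IHL from a corrupt packet */

        /* Old behaviour: the unsigned subtraction wraps around. */
        unsigned int iters = ihl - 4;      /* 0xfffffffe iterations */
        printf("old loop: %u iterations, ~%llu bytes dereferenced\n",
               iters, 16 + 4ULL * iters);  /* roughly 16GB starting at iph */

        /* Fixed behaviour: the signed counter goes negative, so the do/while
         * body runs exactly once, i.e. one 4-byte read at iph + 16, which is
         * still inside the 20 bytes of valid memory. */
        int n = (int)ihl - 4;              /* -2 */
        int reads = 0;
        do {
            reads++;                       /* stands in for the 4-byte load */
        } while (--n > 0);
        printf("new loop: %d iteration(s)\n", reads);
        return 0;
    }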
Fixes: 0e455d8e80aa ("arm64: Implement optimised IP checksum helpers")
Reported-by: guodeqing
Signed-off-by: Robin Murphy
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 arch/arm64/include/asm/checksum.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/checksum.h b/arch/arm64/include/asm/checksum.h
index b6f7bc6da5fb3..93a161b3bf3fe 100644
--- a/arch/arm64/include/asm/checksum.h
+++ b/arch/arm64/include/asm/checksum.h
@@ -24,16 +24,17 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
 {
 	__uint128_t tmp;
 	u64 sum;
+	int n = ihl; /* we want it signed */
 
 	tmp = *(const __uint128_t *)iph;
 	iph += 16;
-	ihl -= 4;
+	n -= 4;
 	tmp += ((tmp >> 64) | (tmp << 64));
 	sum = tmp >> 64;
 	do {
 		sum += *(const u32 *)iph;
 		iph += 4;
-	} while (--ihl);
+	} while (--n > 0);
 	sum += ((sum >> 32) | (sum << 32));
 	return csum_fold((__force u32)(sum >> 32));
 }