From patchwork Mon Feb 10 12:34:12 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 231677
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Alexander Shishkin,
    Song Liu, "Peter Zijlstra (Intel)", Ingo Molnar
Subject: [PATCH 4.19 194/195] perf/core: Fix mlock accounting in perf_mmap()
Date: Mon, 10 Feb 2020 04:34:12 -0800
Message-Id: <20200210122324.124825345@linuxfoundation.org>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200210122305.731206734@linuxfoundation.org>
References: <20200210122305.731206734@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Song Liu

commit 003461559ef7a9bd0239bae35a22ad8924d6e9ad upstream.

Decreasing sysctl_perf_event_mlock between two consecutive perf_mmap()s
of a perf ring buffer may lead to an integer underflow in locked memory
accounting. This may lead to undesired behaviors, such as failures in
BPF map creation.

Address this by adjusting the accounting logic to take into account the
possibility that the amount of already locked memory may exceed the
current limit.
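To make the failure mode concrete, here is a minimal, standalone
user-space sketch of the arithmetic that the patch below changes. It is
not kernel code: the variable names mirror perf_mmap(), but the numbers
and the demonstration subtraction are made up for illustration.

	/*
	 * Illustrative user-space sketch only (not kernel code): variable
	 * names mirror perf_mmap(), the numbers are invented.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long user_lock_limit = 512;  /* limit after sysctl_perf_event_mlock was lowered */
		unsigned long locked_vm = 1024;       /* pages already charged by an earlier perf_mmap() */
		unsigned long user_extra = 128;       /* pages requested by the next perf_mmap() */
		unsigned long extra = 0;

		/* Old formula: "extra" is not bounded by the size of the new mapping. */
		unsigned long user_locked = locked_vm + user_extra;      /* 1152 */

		if (user_locked > user_lock_limit)
			extra = user_locked - user_lock_limit;           /* 640, i.e. > user_extra */

		/*
		 * Subtracting such an oversized "extra" from any smaller unsigned
		 * counter wraps around -- the kind of underflow in locked memory
		 * accounting that the commit message describes.
		 */
		printf("extra = %lu, user_extra - extra wraps to %lu\n",
		       extra, user_extra - extra);
		return 0;
	}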
Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Suggested-by: Alexander Shishkin
Signed-off-by: Song Liu
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Cc:
Acked-by: Alexander Shishkin
Link: https://lkml.kernel.org/r/20200123181146.2238074-1-songliubraving@fb.com
Signed-off-by: Greg Kroah-Hartman
---
 kernel/events/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5709,7 +5709,15 @@ accounting:
 	 */
 	user_lock_limit *= num_online_cpus();
 
-	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+	user_locked = atomic_long_read(&user->locked_vm);
+
+	/*
+	 * sysctl_perf_event_mlock may have changed, so that
+	 *     user->locked_vm > user_lock_limit
+	 */
+	if (user_locked > user_lock_limit)
+		user_locked = user_lock_limit;
+	user_locked += user_extra;
 
 	if (user_locked > user_lock_limit)
 		extra = user_locked - user_lock_limit;
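For comparison, the clamped computation introduced by the hunk above can
be sketched as a hypothetical compute_extra() helper (again plain
user-space C with the same illustrative values, not the kernel function
itself): by clamping user_locked to the limit before adding user_extra,
the resulting extra can never exceed the size of the new mapping, so the
later locked-memory bookkeeping stays balanced.

	#include <assert.h>

	/* Hypothetical helper mirroring the clamped logic of the hunk above. */
	static unsigned long compute_extra(unsigned long locked_vm,
					   unsigned long user_extra,
					   unsigned long user_lock_limit)
	{
		unsigned long user_locked = locked_vm;

		/* Clamp first: locked_vm may already exceed a freshly lowered limit. */
		if (user_locked > user_lock_limit)
			user_locked = user_lock_limit;
		user_locked += user_extra;

		return user_locked > user_lock_limit ? user_locked - user_lock_limit : 0;
	}

	int main(void)
	{
		/* Same illustrative numbers as before: extra is now capped at user_extra. */
		assert(compute_extra(1024, 128, 512) == 128);
		return 0;
	}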