From patchwork Thu Apr 16 13:25:03 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 227863
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
    "Peter Zijlstra (Intel)", Ingo Molnar, Sasha Levin
Subject: [PATCH 5.5 252/257] perf/core: Unify {pinned,flexible}_sched_in()
Date: Thu, 16 Apr 2020 15:25:03 +0200
Message-Id: <20200416131357.043818675@linuxfoundation.org>
In-Reply-To: <20200416131325.891903893@linuxfoundation.org>
References: <20200416131325.891903893@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Peter Zijlstra

[ Upstream commit ab6f824cfdf7363b5e529621cbc72ae6519c78d1 ]

Less is more; unify the two very nearly identical functions.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 kernel/events/core.c | 58 ++++++++++++++++----------------------------
 1 file changed, 21 insertions(+), 37 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index fdb7f7ef380c4..b3d4f485bcfa6 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1986,6 +1986,12 @@ static int perf_get_aux_event(struct perf_event *event,
 	return 1;
 }
 
+static inline struct list_head *get_event_list(struct perf_event *event)
+{
+	struct perf_event_context *ctx = event->ctx;
+	return event->attr.pinned ? &ctx->pinned_active : &ctx->flexible_active;
+}
+
 static void perf_group_detach(struct perf_event *event)
 {
 	struct perf_event *sibling, *tmp;
@@ -2028,12 +2034,8 @@ static void perf_group_detach(struct perf_event *event)
 		if (!RB_EMPTY_NODE(&event->group_node)) {
 			add_event_to_groups(sibling, event->ctx);
 
-			if (sibling->state == PERF_EVENT_STATE_ACTIVE) {
-				struct list_head *list = sibling->attr.pinned ?
-					&ctx->pinned_active : &ctx->flexible_active;
-
-				list_add_tail(&sibling->active_list, list);
-			}
+			if (sibling->state == PERF_EVENT_STATE_ACTIVE)
+				list_add_tail(&sibling->active_list, get_event_list(sibling));
 		}
 
 		WARN_ON_ONCE(sibling->ctx != event->ctx);
@@ -2350,6 +2352,8 @@ event_sched_in(struct perf_event *event,
 {
 	int ret = 0;
 
+	WARN_ON_ONCE(event->ctx != ctx);
+
 	lockdep_assert_held(&ctx->lock);
 
 	if (event->state <= PERF_EVENT_STATE_OFF)
@@ -3425,10 +3429,12 @@ struct sched_in_data {
 	int can_add_hw;
 };
 
-static int pinned_sched_in(struct perf_event *event, void *data)
+static int merge_sched_in(struct perf_event *event, void *data)
 {
 	struct sched_in_data *sid = data;
 
+	WARN_ON_ONCE(event->ctx != sid->ctx);
+
 	if (event->state <= PERF_EVENT_STATE_OFF)
 		return 0;
 
@@ -3437,37 +3443,15 @@ static int pinned_sched_in(struct perf_event *event, void *data)
 
 	if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) {
 		if (!group_sched_in(event, sid->cpuctx, sid->ctx))
-			list_add_tail(&event->active_list, &sid->ctx->pinned_active);
+			list_add_tail(&event->active_list, get_event_list(event));
 	}
 
-	/*
-	 * If this pinned group hasn't been scheduled,
-	 * put it in error state.
-	 */
-	if (event->state == PERF_EVENT_STATE_INACTIVE)
-		perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
-
-	return 0;
-}
-
-static int flexible_sched_in(struct perf_event *event, void *data)
-{
-	struct sched_in_data *sid = data;
-
-	if (event->state <= PERF_EVENT_STATE_OFF)
-		return 0;
-
-	if (!event_filter_match(event))
-		return 0;
+	if (event->state == PERF_EVENT_STATE_INACTIVE) {
+		if (event->attr.pinned)
+			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
 
-	if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) {
-		int ret = group_sched_in(event, sid->cpuctx, sid->ctx);
-		if (ret) {
-			sid->can_add_hw = 0;
-			sid->ctx->rotate_necessary = 1;
-			return 0;
-		}
-		list_add_tail(&event->active_list, &sid->ctx->flexible_active);
+		sid->can_add_hw = 0;
+		sid->ctx->rotate_necessary = 1;
 	}
 
 	return 0;
@@ -3485,7 +3469,7 @@ ctx_pinned_sched_in(struct perf_event_context *ctx,
 
 	visit_groups_merge(&ctx->pinned_groups,
 			   smp_processor_id(),
-			   pinned_sched_in, &sid);
+			   merge_sched_in, &sid);
 }
 
 static void
@@ -3500,7 +3484,7 @@ ctx_flexible_sched_in(struct perf_event_context *ctx,
 
 	visit_groups_merge(&ctx->flexible_groups,
 			   smp_processor_id(),
-			   flexible_sched_in, &sid);
+			   merge_sched_in, &sid);
 }
 
 static void
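
The unification works because pinned_sched_in() and flexible_sched_in()
differed only in which active list a scheduled group was appended to, and
in what happens when a group cannot be scheduled: a pinned group is put
into the error state, while a flexible group only clears can_add_hw and
sets rotate_necessary. merge_sched_in() keeps one branch per difference,
and get_event_list() picks the target list from event->attr.pinned. As a
rough illustration of that pattern (toy types and names, nothing from the
kernel tree), the same collapse of two near-identical callbacks into one
looks like this in stand-alone C:

#include <stdio.h>
#include <stdbool.h>

struct toy_ctx {
	int pinned_active;	/* stand-in for ctx->pinned_active */
	int flexible_active;	/* stand-in for ctx->flexible_active */
};

struct toy_event {
	bool pinned;		/* stand-in for event->attr.pinned */
	struct toy_ctx *ctx;
};

/* Mirrors get_event_list(): one helper selects the target by the flag. */
static int *get_active_counter(struct toy_event *event)
{
	struct toy_ctx *ctx = event->ctx;

	return event->pinned ? &ctx->pinned_active : &ctx->flexible_active;
}

/*
 * One callback replaces two near-identical ones; the only real
 * divergence (error state for pinned groups) sits behind one branch.
 */
static int merge_toy_in(struct toy_event *event, bool can_schedule)
{
	if (can_schedule) {
		(*get_active_counter(event))++;
		return 0;
	}

	if (event->pinned)
		fprintf(stderr, "pinned group failed: would set ERROR state\n");

	/* a real caller would clear can_add_hw / set rotate_necessary here */
	return -1;
}

int main(void)
{
	struct toy_ctx ctx = { 0, 0 };
	struct toy_event pinned = { true, &ctx };
	struct toy_event flexible = { false, &ctx };

	merge_toy_in(&pinned, true);
	merge_toy_in(&flexible, true);
	printf("pinned_active=%d flexible_active=%d\n",
	       ctx.pinned_active, ctx.flexible_active);
	return 0;
}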