From patchwork Wed May 3 08:54:48 2017
From: Kefeng Wang
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 98473
Subject: [PATCH redhat-7.3.x] perf/x86/intel/cqm: Make sure events without RMID are always in the tail of cache_groups
Date: Wed, 3 May 2017 16:54:48 +0800
Message-ID: <1493801688-58971-1-git-send-email-wangkefeng.wang@huawei.com>
In-Reply-To: <59093386.70400@huawei.com>
References: <59093386.70400@huawei.com>
X-Mailing-List: stable@vger.kernel.org

From: Zefan Li

euler inclusion
category: bugfix
bugzilla: NA
DTS: DTS2017030810544
CVE: NA

-------------------------------------------------

It is assumed that the head of cache_groups always has a valid RMID,
which isn't true. When we deallocate RMIDs from conflicting events, we
currently don't move those events to the tail, and one of them can end
up at the head. Another case is that intel_cqm_sched_in_event()
allocates RMIDs for all the events except the head event.

Besides, there is another bug: in __intel_cqm_rmid_rotate() we retry
rotating without resetting nr_needed and start.

Those bugs combined lead to the following oops.

  WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186 __put_rmid+0x28/0x80()
  ...
  [] __put_rmid+0x28/0x80
  [] intel_cqm_rmid_rotate+0xba/0x440
  [] process_one_work+0x17b/0x470
  [] worker_thread+0x11b/0x400
  ...
  BUG: unable to handle kernel NULL pointer dereference at (null)
  ...
  [] intel_cqm_rmid_rotate+0xba/0x440
  [] process_one_work+0x17b/0x470
  [] worker_thread+0x11b/0x400

Cc: stable@vger.kernel.org
Signed-off-by: Zefan Li
[ kf: adjust file path and name ]
Signed-off-by: Kefeng Wang
---
 arch/x86/events/intel/cqm.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

-- 
1.8.3.1

diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
index 0626f29..d15a0a0 100644
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -568,6 +568,12 @@ static bool intel_cqm_sched_in_event(u32 rmid)
 
 	leader = list_first_entry(&cache_groups, struct perf_event,
 				  hw.cqm_groups_entry);
+
+	if (!list_empty(&cache_groups) && !__rmid_valid(leader->hw.cqm_rmid)) {
+		intel_cqm_xchg_rmid(leader, rmid);
+		return true;
+	}
+
 	event = leader;
 
 	list_for_each_entry_continue(event, &cache_groups,
@@ -736,6 +742,7 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
 {
 	struct perf_event *group, *g;
 	u32 rmid;
+	LIST_HEAD(conflicting_groups);
 
 	lockdep_assert_held(&cache_mutex);
 
@@ -759,6 +766,7 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
 
 		intel_cqm_xchg_rmid(group, INVALID_RMID);
 		__put_rmid(rmid);
+		list_move_tail(&group->hw.cqm_groups_entry, &conflicting_groups);
 	}
 }
 
@@ -788,9 +796,9 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
  */
 static bool __intel_cqm_rmid_rotate(void)
 {
-	struct perf_event *group, *start = NULL;
+	struct perf_event *group, *start;
 	unsigned int threshold_limit;
-	unsigned int nr_needed = 0;
+	unsigned int nr_needed;
 	unsigned int nr_available;
 	bool rotated = false;
 
@@ -804,6 +812,8 @@ again:
 	if (list_empty(&cache_groups) && list_empty(&cqm_rmid_limbo_lru))
 		goto out;
 
+	nr_needed = 0;
+	start = NULL;
 	list_for_each_entry(group, &cache_groups, hw.cqm_groups_entry) {
 		if (!__rmid_valid(group->hw.cqm_rmid)) {
 			if (!start)
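
For readers unfamiliar with the rotation code, the sketch below is a minimal
user-space model of the second problem described in the changelog: per-pass
bookkeeping such as nr_needed and start must be re-derived on every rotation
attempt rather than initialised once at function entry. This is not the
kernel code; all names here (struct grp, rotate_once, NR_GROUPS, INVALID_ID)
are invented for illustration only.

/*
 * Minimal user-space sketch (not the kernel code) of the retry pitfall
 * fixed by the last two hunks above: state that describes "this pass"
 * must be reset at the start of every pass.  All names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_GROUPS	4
#define INVALID_ID	0	/* plays the role of INVALID_RMID */

struct grp {
	unsigned int id;	/* stands in for event->hw.cqm_rmid */
};

static struct grp groups[NR_GROUPS] = {
	{ .id = INVALID_ID }, { .id = 7 }, { .id = INVALID_ID }, { .id = 9 },
};

/* One rotation pass: count the groups that still lack an id. */
static bool rotate_once(void)
{
	/*
	 * The bug pattern before the patch: these were initialised once,
	 * outside the retry loop, so a second pass started with stale
	 * values.  Resetting them at the top of every pass keeps each
	 * attempt consistent with the current state of the list.
	 */
	unsigned int nr_needed = 0;
	struct grp *start = NULL;
	int i;

	for (i = 0; i < NR_GROUPS; i++) {
		if (groups[i].id == INVALID_ID) {
			if (!start)
				start = &groups[i];
			nr_needed++;
		}
	}

	printf("pass: %u group(s) still need an id\n", nr_needed);
	return nr_needed == 0;
}

int main(void)
{
	int attempt, i;

	/* Bounded retry loop; each pass re-derives its own bookkeeping. */
	for (attempt = 0; attempt < 3; attempt++) {
		if (rotate_once())
			break;

		/* Pretend one recycled id became available between passes. */
		for (i = 0; i < NR_GROUPS; i++) {
			if (groups[i].id == INVALID_ID) {
				groups[i].id = 100 + attempt;
				break;
			}
		}
	}
	return 0;
}

The same reasoning motivates the list handling in the earlier hunks: groups
whose RMID has just been dropped are moved to the tail, so the head of
cache_groups keeps a valid RMID as the rotation code assumes.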