From patchwork Thu Sep 8 12:46:42 2016
X-Patchwork-Submitter: Chunyan Zhang
X-Patchwork-Id: 75782
From: Chunyan Zhang
To: rostedt@goodmis.org, mingo@redhat.com
Cc: zhang.lyra@gmail.com, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, takahiro.akashi@linaro.org,
    mark.yang@spreadtrum.com
Subject: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
Date: Thu, 8 Sep 2016 20:46:42 +0800
Message-Id: <1473338802-18712-1-git-send-email-zhang.chunyan@linaro.org>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-kernel@vger.kernel.org

When preempt debugging or the preempt tracer is enabled,
preempt_count_add/sub() can be traced by the function and function
graph tracers. Since preempt_disable/enable() call
preempt_count_add/sub(), the ftrace subsystem should use
preempt_disable/enable_notrace() instead.

Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
events do") added a this_cpu_read() call to trace_graph_entry(). If
this_cpu_read() calls preempt_disable(), the graph tracer goes into a
recursive loop, even when tracing_on is disabled.
So this patch changes this_cpu_read() to use
preempt_disable/enable_notrace() instead.

Yonghui Yang helped a lot in finding the root cause of this problem,
so his Signed-off-by is added as well.

Signed-off-by: Yonghui Yang
Signed-off-by: Chunyan Zhang
---
 arch/arm64/include/asm/percpu.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

-- 
2.7.4

Acked-by: Will Deacon

diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456be..2fee2f5 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })

 #define _percpu_write(pcp, val)						\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val),	\
 		       sizeof(pcp));					\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\

 #define _pcp_protect(operation, pcp, val)			\