From patchwork Fri Mar 25 02:22:17 2022
X-Patchwork-Submitter: "Chang S. Bae" <chang.seok.bae@intel.com>
X-Patchwork-Id: 555494
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org
Cc: tglx@linutronix.de, dave.hansen@linux.intel.com, peterz@infradead.org,
    bp@alien8.de, rafael@kernel.org, ravi.v.shankar@intel.com,
    chang.seok.bae@intel.com
Subject: [PATCH v3 1/3] x86/fpu: Make XCR0 accessors immune to unwanted compiler reordering
Date: Thu, 24 Mar 2022 19:22:17 -0700
Message-Id: <20220325022219.829-2-chang.seok.bae@intel.com>
In-Reply-To: <20220325022219.829-1-chang.seok.bae@intel.com>
References: <20220325022219.829-1-chang.seok.bae@intel.com>

Some old GCC versions (4.9.x and 5.x) have a bug that can incorrectly
reorder volatile asm statements with respect to each other [1]. While the
bug was fixed in later versions (8.1, 7.3, and 6.5), and the kernel's
current XCR0 read/write accessors do not appear to be affected, it is
prudent to make sure they stay in program order.

Add a memory clobber to the write accessor to prevent the compiler from
caching or reordering memory accesses across XCR0 writes. For the read
accessor, a dummy operand referencing an arbitrary address prevents the
compiler from reordering it against those writes. Add the dummy operand
to the read accessor, as was done for other accessors in commit
aa5cacdc29d ("x86/asm: Replace __force_order with a memory clobber").

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
Changes from v2:
* Add as a new patch (Dave Hansen).
---
 arch/x86/include/asm/fpu/xcr.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/fpu/xcr.h b/arch/x86/include/asm/fpu/xcr.h
index 9656a5bc6fea..9b513e7c0161 100644
--- a/arch/x86/include/asm/fpu/xcr.h
+++ b/arch/x86/include/asm/fpu/xcr.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_FPU_XCR_H
 #define _ASM_X86_FPU_XCR_H
 
+#include <asm/special_insns.h>
+
 #define XCR_XFEATURE_ENABLED_MASK	0x00000000
 #define XCR_XFEATURE_IN_USE_MASK	0x00000001
 
@@ -9,7 +11,8 @@ static inline u64 xgetbv(u32 index)
 {
 	u32 eax, edx;
 
-	asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index));
+	asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index),
+		     __FORCE_ORDER);
 	return eax + ((u64)edx << 32);
 }
 
@@ -18,7 +21,8 @@ static inline void xsetbv(u32 index, u64 value)
 	u32 eax = value;
 	u32 edx = value >> 32;
 
-	asm volatile("xsetbv" :: "a" (eax), "d" (edx), "c" (index));
+	asm volatile("xsetbv" :: "a" (eax), "d" (edx), "c" (index)
+		     : "memory");
 }
 
 /*
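[Editor's sketch, not part of the series: the dummy-operand idea from the patch
above, shown as a minimal user-space program. The force_order variable, the
FORCE_ORDER macro name, and the program around them are illustrative stand-ins;
the kernel's __FORCE_ORDER instead references a fixed dummy address. It assumes
an x86-64 toolchain and a CPU/OS with OSXSAVE enabled, where XGETBV with ECX=0
is legal from user space.]

#include <stdio.h>
#include <stdint.h>

/*
 * Dummy "m" input operand: never actually accessed, it only ties volatile
 * asm statements to the same memory location so even the affected GCC
 * versions (PR82602) cannot reorder them against each other.
 */
static unsigned int force_order;
#define FORCE_ORDER "m" (force_order)

static inline uint64_t xgetbv(uint32_t index)
{
	uint32_t eax, edx;

	asm volatile("xgetbv" : "=a" (eax), "=d" (edx)
			      : "c" (index), FORCE_ORDER);
	return eax | ((uint64_t)edx << 32);
}

int main(void)
{
	/* XCR0 (index 0): the XFEATURE_ENABLED_MASK register. */
	printf("XCR0 = %#llx\n", (unsigned long long)xgetbv(0));
	return 0;
}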
Bae" To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org Cc: tglx@linutronix.de, dave.hansen@linux.intel.com, peterz@infradead.org, bp@alien8.de, rafael@kernel.org, ravi.v.shankar@intel.com, chang.seok.bae@intel.com Subject: [PATCH v3 2/3] x86/fpu: Add a helper to prepare AMX state for low-power CPU idle Date: Thu, 24 Mar 2022 19:22:18 -0700 Message-Id: <20220325022219.829-3-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220325022219.829-1-chang.seok.bae@intel.com> References: <20220325022219.829-1-chang.seok.bae@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org When a CPU enters an idle state, non-initialized states left in large registers may be the cause of preventing deeper low-power states. The new helper ensures the AMX state is initialized to make the CPU ready for low-power states. It will be used by the intel idle driver. Signed-off-by: Chang S. Bae Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v2: * Check the feature flag instead of fpu_state_size_dynamic() (Dave Hansen). Changes from v1: * Check the dynamic state flag first, to avoid #UD with XGETBV(1). --- arch/x86/include/asm/fpu/api.h | 2 ++ arch/x86/include/asm/special_insns.h | 9 +++++++++ arch/x86/kernel/fpu/core.c | 13 +++++++++++++ 3 files changed, 24 insertions(+) diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index c83b3020350a..df48912fd1c8 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -165,4 +165,6 @@ static inline bool fpstate_is_confidential(struct fpu_guest *gfpu) struct task_struct; extern long fpu_xstate_prctl(struct task_struct *tsk, int option, unsigned long arg2); +extern void fpu_idle_fpregs(void); + #endif /* _ASM_X86_FPU_API_H */ diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h index 68c257a3de0d..d434fbaeb3ff 100644 --- a/arch/x86/include/asm/special_insns.h +++ b/arch/x86/include/asm/special_insns.h @@ -294,6 +294,15 @@ static inline int enqcmds(void __iomem *dst, const void *src) return 0; } +static inline void tile_release(void) +{ + /* + * Instruction opcode for TILERELEASE; supported in binutils + * version >= 2.36. + */ + asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0"); +} + #endif /* __KERNEL__ */ #endif /* _ASM_X86_SPECIAL_INSNS_H */ diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 8dea01ffc5c1..3507609e22d7 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -847,3 +847,16 @@ int fpu__exception_code(struct fpu *fpu, int trap_nr) */ return 0; } + +/* + * Initialize register state that may prevent from entering low-power idle. + * This function will be invoked from the cpuidle driver only when needed. + */ +void fpu_idle_fpregs(void) +{ + if (cpu_feature_enabled(X86_FEATURE_XGETBV1) && + (xfeatures_in_use() & XFEATURE_MASK_XTILE)) { + tile_release(); + fpregs_deactivate(¤t->thread.fpu); + } +} From patchwork Fri Mar 25 02:22:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
Bae" X-Patchwork-Id: 554177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D3D0C433EF for ; Fri, 25 Mar 2022 02:30:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242818AbiCYCcL (ORCPT ); Thu, 24 Mar 2022 22:32:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51450 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357737AbiCYCcK (ORCPT ); Thu, 24 Mar 2022 22:32:10 -0400 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CEC3B6E45; Thu, 24 Mar 2022 19:30:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1648175438; x=1679711438; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=yQjfEUUI7oVHVPerk9SvpBNm5Nl0BnDkY26G7WSlbXw=; b=NAh4pJT+zpDF5T81V4rXFYEKlrC+cyXZUdHEu4vZf2yWqwdvJ3kRORi4 BtX1iJEEs4Ig7NfYZ3hrevAuL1c9WlUibEDC4NuDx6hP/va00lzmDVxVb aIsA5QAaMgIL2gfnlm6l1T05vL0dNr2hiRdfG7vn4v8K+zZYMmcDb9xo2 cCyhexQudwaLPRTZeXsT0IEGnM0kbizc76PNs2o8uqcuMl8NFN2iUqP7E fVJpF8jdQL1Yyij886LvPD1J3q86v51FdIZPxsMBXuaJSESU88NnRbwlw TZl+Xrpk5h3vRdUDASPdyHDFo8wZ6tOlIsYBqK5/QQQugwOLmPbJ8X5gq w==; X-IronPort-AV: E=McAfee;i="6200,9189,10296"; a="321733684" X-IronPort-AV: E=Sophos;i="5.90,209,1643702400"; d="scan'208";a="321733684" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Mar 2022 19:30:36 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,209,1643702400"; d="scan'208";a="693531285" Received: from chang-linux-3.sc.intel.com ([172.25.112.114]) by fmsmga001.fm.intel.com with ESMTP; 24 Mar 2022 19:30:35 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org Cc: tglx@linutronix.de, dave.hansen@linux.intel.com, peterz@infradead.org, bp@alien8.de, rafael@kernel.org, ravi.v.shankar@intel.com, chang.seok.bae@intel.com, Artem Bityutskiy Subject: [PATCH v3 3/3] intel_idle: Add a new flag to initialize the AMX state Date: Thu, 24 Mar 2022 19:22:19 -0700 Message-Id: <20220325022219.829-4-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220325022219.829-1-chang.seok.bae@intel.com> References: <20220325022219.829-1-chang.seok.bae@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org The non-initialized AMX state can be the cause of C-state demotion from C6 to C1E. This low-power idle state may improve power savings and thus result in a higher available turbo frequency budget. This behavior is implementation-specific. Initialize the state for the C6 entrance of Sapphire Rapids as needed. Suggested-by: Peter Zijlstra (Intel) Signed-off-by: Chang S. Bae Tested-by : Zhang Rui Cc: Artem Bityutskiy Cc: linux-kernel@vger.kernel.org Cc: linux-pm@vger.kernel.org Acked-by: Rafael J. Wysocki --- Changes from v2: * Remove an unnecessary backslash (Rafael Wysocki). Changes from v1: * Simplify the code with a new flag (Rui). * Rebase on Artem's patches for SPR intel_idle. * Massage the changelog. 
---
 drivers/idle/intel_idle.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index b7640cfe0020..d35790890a3f 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -54,6 +54,7 @@
 #include <asm/intel-family.h>
 #include <asm/mwait.h>
 #include <asm/msr.h>
+#include <asm/fpu/api.h>
 
 #define INTEL_IDLE_VERSION "0.5.1"
 
@@ -100,6 +101,11 @@ static unsigned int mwait_substates __initdata;
  */
 #define CPUIDLE_FLAG_ALWAYS_ENABLE	BIT(15)
 
+/*
+ * Initialize large xstate for the C6-state entrance.
+ */
+#define CPUIDLE_FLAG_INIT_XSTATE	BIT(16)
+
 /*
  * MWAIT takes an 8-bit "hint" in EAX "suggesting"
  * the C-state (top nibble) and sub-state (bottom nibble)
@@ -134,6 +140,9 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
 	if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
 		local_irq_enable();
 
+	if (state->flags & CPUIDLE_FLAG_INIT_XSTATE)
+		fpu_idle_fpregs();
+
 	mwait_idle_with_hints(eax, ecx);
 
 	return index;
@@ -154,8 +163,12 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
 static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev,
 				       struct cpuidle_driver *drv, int index)
 {
-	unsigned long eax = flg2MWAIT(drv->states[index].flags);
 	unsigned long ecx = 1; /* break on interrupt flag */
+	struct cpuidle_state *state = &drv->states[index];
+	unsigned long eax = flg2MWAIT(state->flags);
+
+	if (state->flags & CPUIDLE_FLAG_INIT_XSTATE)
+		fpu_idle_fpregs();
 
 	mwait_idle_with_hints(eax, ecx);
 
@@ -790,7 +803,8 @@ static struct cpuidle_state spr_cstates[] __initdata = {
 	{
 		.name = "C6",
 		.desc = "MWAIT 0x20",
-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED |
+			 CPUIDLE_FLAG_INIT_XSTATE,
		.exit_latency = 290,
		.target_residency = 800,
		.enter = &intel_idle,
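[Editor's note on why BIT(16) is free for the new flag: intel_idle packs the
MWAIT hint into bits 31:24 of each state's .flags word, leaving the low bits
for generic and driver-private CPUIDLE flags (BIT(15) is already taken by
CPUIDLE_FLAG_ALWAYS_ENABLE, as the hunk above shows). The stand-alone sketch
below illustrates that packing; the MWAIT2flg()/flg2MWAIT() definitions are
assumed to mirror intel_idle.c and are not taken from this patch.]

#include <stdio.h>

#define BIT(n)			(1U << (n))

/* Assumed to mirror intel_idle.c: the MWAIT hint lives in flags bits 31:24. */
#define MWAIT2flg(eax_hint)	(((eax_hint) & 0xff) << 24)
#define flg2MWAIT(flags)	(((flags) >> 24) & 0xff)

/* Driver-private flag added by this patch. */
#define CPUIDLE_FLAG_INIT_XSTATE	BIT(16)

int main(void)
{
	unsigned int flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_INIT_XSTATE;

	/* 0x20 is the hint the SPR C6 entry uses: C-state in the top nibble,
	 * sub-state in the bottom nibble. The flag bit does not collide. */
	printf("mwait hint   = 0x%02x\n", flg2MWAIT(flags));
	printf("init xstate? = %s\n",
	       (flags & CPUIDLE_FLAG_INIT_XSTATE) ? "yes" : "no");
	return 0;
}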