From patchwork Sat Jun 26 14:22:13 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 467480
From: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
 Andy Lutomirski
Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
 Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
 x86@kernel.org, linux-kernel@vger.kernel.org, Rafael J. Wysocki,
 linux-acpi@vger.kernel.org
Subject: [PATCH v2 5/5] x86: Skip WBINVD instruction for VM guest
Date: Sat, 26 Jun 2021 07:22:13 -0700
Message-Id: <178da14b6e77c7ab87ba5fd8ad121ab10dcae2e3.1624666915.git.sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-acpi@vger.kernel.org

VM guests that support ACPI use standard ACPI mechanisms to signal
sleep state entry (including reboot) to the host. The ACPI
specification mandates WBINVD on any sleep state entry with the
expectation that the platform is only responsible for maintaining the
state of memory over sleep states, not preserving dirty data in any
CPU caches. ACPI cache flushing requirements pre-date the advent of
virtualization.

Given that guest sleep state entry does not affect any host power
rails, there is no need to flush caches. The host is responsible for
maintaining cache state over its own bare metal sleep state
transitions that power off the cache. A TDX guest, unlike a typical
guest, will machine check if the CPU cache is powered off.
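For background on the guard used below: the kernel sets
X86_FEATURE_HYPERVISOR from CPUID leaf 1, ECX bit 31, which reads 0 on
bare metal and is set by hypervisors. A minimal user-space sketch of
the same detection (illustrative only, not part of this patch):

	#include <cpuid.h>	/* GCC/clang helper for the CPUID instruction */
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Leaf 1: feature flags. ECX bit 31 is the hypervisor-present bit. */
		if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
			return 1;

		printf("running under a hypervisor: %s\n",
		       (ecx & (1u << 31)) ? "yes" : "no");
		return 0;
	}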
Cc: Rafael J. Wysocki
Cc: linux-acpi@vger.kernel.org
Reviewed-by: Dan Williams
Acked-by: Rafael J. Wysocki
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/include/asm/acenv.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h
index 9aff97f0de7f..d4162e94bee8 100644
--- a/arch/x86/include/asm/acenv.h
+++ b/arch/x86/include/asm/acenv.h
@@ -10,10 +10,15 @@
 #define _ASM_X86_ACENV_H
 
 #include <asm/special_insns.h>
+#include <asm/cpufeature.h>
 
 /* Asm macros */
 
-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
+#define ACPI_FLUSH_CPU_CACHE()				\
+do {							\
+	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))	\
+		wbinvd();				\
+} while (0)
 
 int __acpi_acquire_global_lock(unsigned int *lock);
 int __acpi_release_global_lock(unsigned int *lock);
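For reference, ACPI_FLUSH_CPU_CACHE() is expanded from the ACPICA sleep
entry paths (e.g. acpi_hw_legacy_sleep() and acpi_hw_extended_sleep()),
so the guard above covers ACPI sleep and reboot flows alike. A
standalone user-space model of the patched macro, with the privileged
WBINVD stubbed out (illustrative sketch only, not kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for the privileged wbinvd instruction. */
	static void wbinvd_stub(void)
	{
		puts("wbinvd: flush and invalidate all CPU caches");
	}

	/* Models the patched macro: flush on bare metal, skip in a guest. */
	#define ACPI_FLUSH_CPU_CACHE_MODEL(is_guest)	\
	do {						\
		if (!(is_guest))			\
			wbinvd_stub();			\
	} while (0)

	int main(void)
	{
		ACPI_FLUSH_CPU_CACHE_MODEL(false);	/* bare metal: flushes */
		ACPI_FLUSH_CPU_CACHE_MODEL(true);	/* VM guest: skipped  */
		return 0;
	}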