From patchwork Thu Nov 29 17:12:23 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 152416
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org, Ingo Molnar, Thomas Gleixner
Cc: Ard Biesheuvel, linux-kernel@vger.kernel.org, Andy Lutomirski,
    Arend van Spriel, Bhupesh Sharma, Borislav Petkov, Dave Hansen,
    Eric Snowberg, Hans de Goede, Joe Perches, Jon Hunter,
    Julien Thierry, Marc Zyngier, Nathan Chancellor, Peter Zijlstra,
    Sai Praneeth Prakhya, Sedat Dilek, YiFei Zhu
Subject: [PATCH 04/11] x86/mm/pageattr: Introduce helper function to unmap EFI boot services
Date: Thu, 29 Nov 2018 18:12:23 +0100
Message-Id: <20181129171230.18699-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20181129171230.18699-1-ard.biesheuvel@linaro.org>
References: <20181129171230.18699-1-ard.biesheuvel@linaro.org>

From: Sai Praneeth Prakhya

Ideally, after the kernel assumes control of the platform, firmware
shouldn't access the EFI boot services code/data regions. In practice,
many x86 platforms have been observed to do so anyway. Hence, during
boot, the kernel reserves the EFI boot services code/data regions [1]
and maps [2] them into efi_pgd so that the call to
set_virtual_address_map() doesn't fail. After returning from
set_virtual_address_map(), the kernel frees the reserved regions [3],
but they still remain mapped. Hence, introduce
kernel_unmap_pages_in_pgd(), which will later be used to unmap the EFI
boot services code/data regions.

While at it, modify kernel_map_pages_in_pgd() by
1. Adding the __init modifier, because it is used *only* during boot.
2. Adding a warning if it is called after SMP is initialized, because
   it uses __flush_tlb_all(), which flushes mappings only on the
   current CPU.

Unmapping the EFI boot services code/data regions clears the
PAGE_PRESENT bit. This doesn't affect the L1TF cases, because they are
already handled by protnone_mask() in
arch/x86/include/asm/pgtable-invert.h.

[1] efi_reserve_boot_services()
[2] efi_map_region() -> __map_region() -> kernel_map_pages_in_pgd()
[3] efi_free_boot_services()

Signed-off-by: Sai Praneeth Prakhya
Cc: Borislav Petkov
Cc: Ingo Molnar
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Bhupesh Sharma
Cc: Peter Zijlstra
Reviewed-by: Thomas Gleixner
Signed-off-by: Ard Biesheuvel
---
 arch/x86/include/asm/pgtable_types.h |  8 ++++--
 arch/x86/mm/pageattr.c               | 40 ++++++++++++++++++++++++++--
 2 files changed, 44 insertions(+), 4 deletions(-)

-- 
2.19.1

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 106b7d0e2dae..d6ff0bbdb394 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -564,8 +564,12 @@ extern pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
                                     unsigned int *level);
 extern pmd_t *lookup_pmd_address(unsigned long address);
 extern phys_addr_t slow_virt_to_phys(void *__address);
-extern int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
-                                   unsigned numpages, unsigned long page_flags);
+extern int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn,
+                                          unsigned long address,
+                                          unsigned numpages,
+                                          unsigned long page_flags);
+extern int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+                                            unsigned long numpages);

 #endif /* !__ASSEMBLY__ */

 #endif /* _ASM_X86_PGTABLE_DEFS_H */
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index db7a10082238..bac35001d896 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2338,8 +2338,8 @@ bool kernel_page_present(struct page *page)

 #endif /* CONFIG_DEBUG_PAGEALLOC */

-int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
-                            unsigned numpages, unsigned long page_flags)
+int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
+                                   unsigned numpages, unsigned long page_flags)
 {
        int retval = -EINVAL;

@@ -2353,6 +2353,8 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
                .flags = 0,
        };

+       WARN_ONCE(num_online_cpus() > 1, "Don't call after initializing SMP");
+
        if (!(__supported_pte_mask & _PAGE_NX))
                goto out;

@@ -2374,6 +2376,40 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
        return retval;
 }

+/*
+ * __flush_tlb_all() flushes mappings only on the current CPU and hence this
+ * function shouldn't be used in an SMP environment. Presently, it's used only
+ * during boot (way before smp_init()) by the EFI subsystem and hence is ok.
+ */
+int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+                                     unsigned long numpages)
+{
+       int retval;
+
+       /*
+        * The typical sequence for unmapping is to find a pte through
+        * lookup_address_in_pgd() (ideally, it should never return NULL
+        * because the address is already mapped) and change its protections.
+        * As pfn is the *target* of a mapping, it's not useful while
+        * unmapping.
+        */
+       struct cpa_data cpa = {
+               .vaddr = &address,
+               .pfn = 0,
+               .pgd = pgd,
+               .numpages = numpages,
+               .mask_set = __pgprot(0),
+               .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+               .flags = 0,
+       };
+
+       WARN_ONCE(num_online_cpus() > 1, "Don't call after initializing SMP");
+
+       retval = __change_page_attr_set_clr(&cpa, 0);
+       __flush_tlb_all();
+
+       return retval;
+}
+
 /*
  * The testcases use internal knowledge of the implementation that shouldn't
  * be exposed to the rest of the kernel. Include these directly here.
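
[Editor's note] For context, below is a minimal sketch of how a follow-up
patch in this series might call the new helper from the EFI code when the
boot services regions are freed. Only kernel_unmap_pages_in_pgd() and its
signature come from the diff above; the wrapper names efi_unmap_pages() and
efi_unmap_boot_services(), and the exact call site, are illustrative
assumptions, not part of this patch.

/*
 * Illustrative sketch (not part of this patch): drop the 1:1 and VA
 * mappings of an EFI boot services region from efi_pgd once the memory
 * has been handed back to the page allocator.
 */
static void __init efi_unmap_pages(efi_memory_desc_t *md)
{
	pgd_t *pgd = efi_mm.pgd;
	u64 pa = md->phys_addr;
	u64 va = md->virt_addr;

	/* Each call clears _PAGE_PRESENT for md->num_pages pages. */
	if (kernel_unmap_pages_in_pgd(pgd, pa, md->num_pages))
		pr_err("Failed to unmap 1:1 mapping for 0x%llx\n", pa);

	if (kernel_unmap_pages_in_pgd(pgd, va, md->num_pages))
		pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
}

/* Hypothetical call site: invoked from efi_free_boot_services(),
 * i.e. during boot and well before smp_init(), so the WARN_ONCE()
 * in the helper does not trigger.
 */
static void __init efi_unmap_boot_services(void)
{
	efi_memory_desc_t *md;

	for_each_efi_memory_desc(md) {
		if (md->type != EFI_BOOT_SERVICES_CODE &&
		    md->type != EFI_BOOT_SERVICES_DATA)
			continue;

		efi_unmap_pages(md);
	}
}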