From patchwork Wed Mar 23 20:41:28 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 64275
Subject: Re: [PATCH] arm64: handle unmapped pages in initrd relocation
From: Ard Biesheuvel
To: Mark Salter
Cc: Catalin Marinas, Will Deacon, Mark Langsdorf,
    "linux-kernel@vger.kernel.org",
    "linux-arm-kernel@lists.infradead.org"
Date: Wed, 23 Mar 2016 21:41:28 +0100
In-Reply-To: <1458762438.23434.120.camel@redhat.com>
References: <1454373031-24218-1-git-send-email-msalter@redhat.com>
 <1458762438.23434.120.camel@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 23 March 2016 at 20:47, Mark Salter wrote:
> On Mon, 2016-02-01 at 19:30 -0500, Mark Salter wrote:
>> Commit 4dffbfc48d65 ("arm64/efi: mark UEFI reserved regions as
>> MEMBLOCK_NOMAP") causes a potential problem in the arm64 initrd
>> relocation code. If the kernel uses a page size greater than the 4k
>> page size used by UEFI, page size rounding may leave one or both ends
>> of the initrd image unmapped. This leads to a panic when the kernel
>> goes to unpack it.
>> This patch looks for unmapped pages at the beginning and end of the
>> initrd image and, if any are seen, relocates the initrd to a new area
>> completely covered by the kernel linear map.
>>
>> Signed-off-by: Mark Salter
>> Cc: Ard Biesheuvel
>> ---
>
> The Fedora folks have run into this problem with a certain kernel
> build. Whatever happened to Ard's suggested fix? The MEMBLOCK_NOMAP
> patch caused a regression which should be fixed -- whether by this
> patch, Ard's patch, or something else.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1309147
>

As I mentioned before, reverting the MEMBLOCK_NOMAP patch will break
ARM. However, I have some patches in flight [1] that I expect Russell
to pick up for v4.7, which will make memremap() work as expected on
ARM.

So for the v4.7 timeframe, we can fix this properly by switching to the
now fully functional memremap() for mapping the UEFI memory map. That
means we can revert the MEMBLOCK_NOMAP change, since memremap() does
not care about it (unlike ioremap_cache(), which disallows mapping
normal memory on ARM).

That does not fix the current breakage, though. Since the issue only
occurs on 64k pages kernels, which implies arm64, we can perhaps apply
the patch below, and revert to using memblock_reserve() in all cases
once [1] is merged.

[1] http://thread.gmane.org/gmane.linux.ports.arm.kernel/484005

-------8<----------
diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
index 7c5a38c60037..95303d19fff5 100644
--- a/drivers/firmware/efi/arm-init.c
+++ b/drivers/firmware/efi/arm-init.c
@@ -240,9 +240,25 @@ void __init efi_init(void)
 	reserve_regions();
 	early_memunmap(memmap.map, params.mmap_size);
-	memblock_mark_nomap(params.mmap & PAGE_MASK,
-			    PAGE_ALIGN(params.mmap_size +
-				       (params.mmap & ~PAGE_MASK)));
+
+	/*
+	 * On 64k pages kernels, marking the memory map as MEMBLOCK_NOMAP may
+	 * cause adjacent allocations sharing the same 64k page frame to be
+	 * removed from the linear mapping as well. If this happens to cover
+	 * the initrd allocation performed by GRUB (which, unlike the stub, does
+	 * not align its EFI_LOADER_DATA allocations to 64k), we will run into
+	 * trouble later on, since the generic initrd code expects the initrd
+	 * to be covered by the linear mapping.
+	 */
+	if (PAGE_SIZE > SZ_4K) {
+		memblock_reserve(params.mmap & PAGE_MASK,
+				 PAGE_ALIGN(params.mmap_size +
+					    (params.mmap & ~PAGE_MASK)));
+	} else {
+		memblock_mark_nomap(params.mmap & PAGE_MASK,
+				    PAGE_ALIGN(params.mmap_size +
+					       (params.mmap & ~PAGE_MASK)));
+	}

 	init_screen_info();
 }