From patchwork Wed Jan 22 11:25:15 2014
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 23504
From: Wang Nan <wangnan0@huawei.com>
To: linux-kernel@vger.kernel.org
Cc: Eric Biederman, Russell King, Andrew Morton, Geng Hui
Subject: [PATCH 2/3] ARM: kexec: copying code to ioremapped area
Date: Wed, 22 Jan 2014 19:25:15 +0800
Message-ID: <1390389916-8711-3-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>
References: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>

ARM's kdump is currently broken (at least on omap4460), mainly because of a cache problem: flush_icache_range() cannot reliably ensure that the copied data actually reaches RAM. Once the MMU is turned off and control jumps to the trampoline, kexec consistently fails with random undefined instructions. This patch uses ioremap to make sure the destination of every memcpy() is uncached memory, both when copying the target kernel and when copying the trampoline.
Signed-off-by: Wang Nan
Cc: # 3.4+
Cc: Eric Biederman
Cc: Russell King
Cc: Andrew Morton
Cc: Geng Hui
---
 arch/arm/kernel/machine_kexec.c | 18 ++++++++++++++++--
 kernel/kexec.c                  | 40 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
index f0d180d..ba0a5a8 100644
--- a/arch/arm/kernel/machine_kexec.c
+++ b/arch/arm/kernel/machine_kexec.c
@@ -144,6 +144,7 @@ void machine_kexec(struct kimage *image)
 	unsigned long page_list;
 	unsigned long reboot_code_buffer_phys;
 	unsigned long reboot_entry = (unsigned long)relocate_new_kernel;
+	void __iomem *reboot_entry_remap;
 	unsigned long reboot_entry_phys;
 	void *reboot_code_buffer;
@@ -171,9 +172,22 @@ void machine_kexec(struct kimage *image)

 	/* copy our kernel relocation code to the control code page */
-	reboot_entry = fncpy(reboot_code_buffer,
-			     reboot_entry,
+	reboot_entry_remap = ioremap_nocache(reboot_code_buffer_phys,
+					     relocate_new_kernel_size);
+	if (reboot_entry_remap == NULL) {
+		pr_warn("startup code may not be reliably flushed\n");
+		reboot_entry_remap = (void __iomem *)reboot_code_buffer;
+	}
+
+	reboot_entry = fncpy(reboot_entry_remap, reboot_entry,
 			     relocate_new_kernel_size);
+	reboot_entry = (unsigned long)reboot_code_buffer +
+		       (reboot_entry -
+			(unsigned long)reboot_entry_remap);
+
+	if (reboot_entry_remap != reboot_code_buffer)
+		iounmap(reboot_entry_remap);
+
 	reboot_entry_phys = (unsigned long)reboot_entry +
 		(reboot_code_buffer_phys - (unsigned long)reboot_code_buffer);
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 9c97016..3e92999 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -806,6 +806,7 @@ static int kimage_load_normal_segment(struct kimage *image,
 	while (mbytes) {
 		struct page *page;
 		char *ptr;
+		void __iomem *ioptr;
 		size_t uchunk, mchunk;

 		page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
@@ -818,7 +819,17 @@ static int kimage_load_normal_segment(struct kimage *image,
 		if (result < 0)
 			goto out;

-		ptr = kmap(page);
+		/*
+		 * Try ioremap to make sure the copied data goes into RAM
+		 * reliably. If failed (some archs don't allow ioremap RAM),
+		 * use kmap instead.
+		 */
+		ioptr = ioremap(page_to_pfn(page) << PAGE_SHIFT,
+				PAGE_SIZE);
+		if (ioptr != NULL)
+			ptr = ioptr;
+		else
+			ptr = kmap(page);
 		/* Start with a clear page */
 		clear_page(ptr);
 		ptr += maddr & ~PAGE_MASK;
@@ -827,7 +838,10 @@ static int kimage_load_normal_segment(struct kimage *image,
 		uchunk = min(ubytes, mchunk);

 		result = copy_from_user(ptr, buf, uchunk);
-		kunmap(page);
+		if (ioptr != NULL)
+			iounmap(ioptr);
+		else
+			kunmap(page);
 		if (result) {
 			result = -EFAULT;
 			goto out;
@@ -846,7 +860,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 {
 	/* For crash dumps kernels we simply copy the data from
 	 * user space to it's destination.
-	 * We do things a page at a time for the sake of kmap.
+	 * We do things a page at a time for the sake of ioremap/kmap.
 	 */
 	unsigned long maddr;
 	size_t ubytes, mbytes;
@@ -861,6 +875,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 	while (mbytes) {
 		struct page *page;
 		char *ptr;
+		void __iomem *ioptr;
 		size_t uchunk, mchunk;

 		page = pfn_to_page(maddr >> PAGE_SHIFT);
@@ -868,7 +883,18 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = -ENOMEM;
 			goto out;
 		}
-		ptr = kmap(page);
+		/*
+		 * Try ioremap to make sure the copied data goes into RAM
+		 * reliably. If failed (some archs don't allow ioremap RAM),
+		 * use kmap instead.
+		 */
+		ioptr = ioremap_nocache(page_to_pfn(page) << PAGE_SHIFT,
+					PAGE_SIZE);
+		if (ioptr != NULL)
+			ptr = ioptr;
+		else
+			ptr = kmap(page);
+
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
 				PAGE_SIZE - (maddr & ~PAGE_MASK));
@@ -879,7 +905,11 @@ static int kimage_load_crash_segment(struct kimage *image,
 		}
 		result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
-		kunmap(page);
+		if (ioptr != NULL)
+			iounmap(ioptr);
+		else
+			kunmap(page);
+
 		if (result) {
 			result = -EFAULT;
 			goto out;