From patchwork Wed Mar 18 14:55:25 2015
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: will.deacon@arm.com, mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Cc: marc.zyngier@arm.com, Ard Biesheuvel
Subject: [PATCH v5 6/8] arm64: merge __enable_mmu and __turn_mmu_on
Date: Wed, 18 Mar 2015 15:55:25 +0100
Message-Id: <1426690527-14258-7-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1426690527-14258-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1426690527-14258-1-git-send-email-ard.biesheuvel@linaro.org>
Enabling of the MMU is split into two functions, with an align and
a branch in the middle. On arm64, the entire kernel Image is ID mapped
so this is really not necessary, and we can just merge it into a
single function.

Also replaces an open coded adrp/add reference to __enable_mmu with
adr_l.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/head.S | 33 +++++++--------------------------
 1 file changed, 7 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 65c7de889c8c..9a2554558e5b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -255,8 +255,7 @@ ENTRY(stext)
 	 */
 	ldr	x27, =__mmap_switched		// address to jump to after
 						// MMU has been enabled
-	adrp	lr, __enable_mmu		// return (PIC) address
-	add	lr, lr, #:lo12:__enable_mmu
+	adr_l	lr, __enable_mmu		// return (PIC) address
 	b	__cpu_setup			// initialise processor
 ENDPROC(stext)
 
@@ -615,11 +614,12 @@ ENDPROC(__secondary_switched)
 #endif	/* CONFIG_SMP */
 
 /*
- * Setup common bits before finally enabling the MMU. Essentially this is just
- * loading the page table pointer and vector base registers.
+ * Enable the MMU.
  *
- * On entry to this code, x0 must contain the SCTLR_EL1 value for turning on
- * the MMU.
+ * x0  = SCTLR_EL1 value for turning on the MMU.
+ * x27 = *virtual* address to jump to upon completion
+ *
+ * other registers depend on the function called upon completion
  */
 __enable_mmu:
 	ldr	x5, =vectors
@@ -627,29 +627,10 @@ __enable_mmu:
 	msr	ttbr0_el1, x25			// load TTBR0
 	msr	ttbr1_el1, x26			// load TTBR1
 	isb
-	b	__turn_mmu_on
-ENDPROC(__enable_mmu)
-
-/*
- * Enable the MMU.  This completely changes the structure of the visible memory
- * space.  You will not be able to trace execution through this.
- *
- * x0  = system control register
- * x27 = *virtual* address to jump to upon completion
- *
- * other registers depend on the function called upon completion
- *
- * We align the entire function to the smallest power of two larger than it to
- * ensure it fits within a single block map entry.  Otherwise were PHYS_OFFSET
- * close to the end of a 512MB or 1GB block we might require an additional
- * table to map the entire function.
- */
-	.align	4
-__turn_mmu_on:
 	msr	sctlr_el1, x0
 	isb
 	br	x27
-ENDPROC(__turn_mmu_on)
+ENDPROC(__enable_mmu)
 
 /*
  * Calculate the start of physical memory.
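
(Note below the diff, so it does not affect git am: the adr_l used above is
a helper macro introduced earlier in this series; per the commit message it
stands in for the same adrp/add pair that was previously open coded. A
rough sketch of such a macro, as an illustration rather than the exact
in-tree definition:)

	.macro	adr_l, dst, sym
	adrp	\dst, \sym
	add	\dst, \dst, :lo12:\sym
	.endm

(adrp materialises the 4 KB page address of the symbol; the add fills in
the low 12 bits, giving a PC-relative, position-independent address.)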