From patchwork Fri Jul 18 11:07:18 2014
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 33830
Date: Fri, 18 Jul 2014 12:07:18 +0100
From: Catalin Marinas <catalin.marinas@arm.com>
To: msalter@redhat.com
Cc: Will Deacon, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] arm64: make CONFIG_ZONE_DMA user settable
Message-ID: <20140718110718.GC19850@arm.com>
References: <1403499924-11214-1-git-send-email-msalter@redhat.com>
 <20140623110937.GB15907@arm.com>
 <1403529423.755.49.camel@deneb.redhat.com>
 <20140624141455.GE4489@arm.com>
 <1403620714.755.69.camel@deneb.redhat.com>
In-Reply-To: <1403620714.755.69.camel@deneb.redhat.com>

On Tue, Jun 24, 2014 at 03:38:34PM +0100, Mark Salter wrote:
> On Tue, 2014-06-24 at 15:14 +0100, Catalin Marinas wrote:
> > On Mon, Jun 23, 2014 at 02:17:03PM +0100, Mark Salter wrote:
> > > On Mon, 2014-06-23 at 12:09 +0100, Catalin Marinas wrote:
> > > > My proposal (in the absence of any kind of description) is to still
> > > > create a ZONE_DMA if we have DMA memory below 32-bit, otherwise just
> > > > add everything (>32-bit) to ZONE_DMA. Basically an extension of your
> > > > CMA patch: make dma_phys_limit static in that file and set it to
> > > > memblock_end_of_DRAM() if there is no 32-bit DMA. Re-use it in the
> > > > zone_sizes_init() function for ZONE_DMA (maybe with a pr_info when
> > > > there is no 32-bit-only DMA zone).
> > >
> > > There's a performance issue with all memory being in ZONE_DMA. It means
> > > all normal allocations will fail on ZONE_NORMAL and then have to fall
> > > back to ZONE_DMA. It would be better to put some percentage of memory
> > > in ZONE_DMA.
> >
> > Is the performance penalty real or just theoretical? I haven't run any
> > benchmarks myself.
>
> It is real insofar as you must eat cycles eliminating ZONE_NORMAL from
> consideration in the page allocation hot path. How much that really
> costs, I don't know. But it seems like it could be easily avoided by
> limiting ZONE_DMA size.
> Is there any reason it needs to be larger than 4GiB?

Basically ZONE_DMA should allow a 32-bit DMA mask. When memory starts
above 4GB, in the absence of an IOMMU, it is likely that 32-bit devices
get some offset applied to the top address bits so that they can reach
the bottom of the memory. The problem is that, this early in the kernel,
dma_to_phys() has no idea about the DMA offsets; they only become known
later (they can be specified in DT per device). The patch below tries to
guess a DMA offset and uses the bottom 32-bit range of the DRAM as
ZONE_DMA.

-------8<-----------------------

From 133656f8378dbb838ad5f12ea29aa9303d7ca922 Mon Sep 17 00:00:00 2001
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Fri, 18 Jul 2014 11:54:37 +0100
Subject: [PATCH] arm64: Create non-empty ZONE_DMA when DRAM starts above 4GB

ZONE_DMA is created to allow 32-bit only devices to access memory in
the absence of an IOMMU. On systems where the memory starts above 4GB,
it is expected that some devices have a DMA offset hardwired to be able
to access the bottom of the memory. Linux currently supports DT
bindings for the DMA offsets but they are not (easily) available early
during boot.

This patch tries to guess a DMA offset and assumes that ZONE_DMA
corresponds to the 32-bit mask above the start of DRAM.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
---
 arch/arm64/mm/init.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7f68804814a1..160bbaa4fc78 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -60,6 +60,17 @@ static int __init early_initrd(char *p)
 early_param("initrd", early_initrd);
 #endif
 
+/*
+ * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). It
+ * currently assumes that for memory starting above 4G, 32-bit devices will
+ * use a DMA offset.
+ */
+static phys_addr_t max_zone_dma_phys(void)
+{
+	phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);
+	return min(offset + (1ULL << 32), memblock_end_of_DRAM());
+}
+
 static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
 	struct memblock_region *reg;
@@ -70,9 +81,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 
 	/* 4GB maximum for 32-bit only capable devices */
 	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
-		unsigned long max_dma_phys =
-			(unsigned long)(dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1);
-		max_dma = max(min, min(max, max_dma_phys >> PAGE_SHIFT));
+		max_dma = PFN_DOWN(max_zone_dma_phys());
 		zone_size[ZONE_DMA] = max_dma - min;
 	}
 	zone_size[ZONE_NORMAL] = max - max_dma;
@@ -142,7 +151,7 @@ void __init arm64_memblock_init(void)
 
 	/* 4GB maximum for 32-bit only capable devices */
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		dma_phys_limit = dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1;
+		dma_phys_limit = max_zone_dma_phys();
 	dma_contiguous_reserve(dma_phys_limit);
 
 	memblock_allow_resize();
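
[Editor's illustration, not part of the patch: a minimal stand-alone
sketch of the same offset guess, using a made-up DRAM layout that
starts at 0x8000000000 with 32GB of RAM. GENMASK_ULL(63, 32) is
open-coded since it is a kernel-only macro.]

/* Hypothetical example mirroring the max_zone_dma_phys() guess above. */
#include <stdint.h>
#include <stdio.h>

/* Made-up DRAM layout: base at 512GB, 32GB of RAM. */
static const uint64_t dram_start = 0x8000000000ULL;
static const uint64_t dram_end   = 0x8000000000ULL + (32ULL << 30);

static uint64_t max_zone_dma_phys(void)
{
	/* GENMASK_ULL(63, 32) open-coded: keep bits 63..32 of the DRAM base. */
	uint64_t offset = dram_start & ~((1ULL << 32) - 1);
	uint64_t limit  = offset + (1ULL << 32);

	return limit < dram_end ? limit : dram_end;
}

int main(void)
{
	/* Prints 0x8100000000: the first 4GB of DRAM would fall in ZONE_DMA. */
	printf("ZONE_DMA phys limit: %#llx\n",
	       (unsigned long long)max_zone_dma_phys());
	return 0;
}

[With a DRAM base below 4GB the guessed offset is 0 and the limit
degenerates to min(4GB, end of DRAM), i.e. the usual 32-bit ZONE_DMA.]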