From patchwork Thu Oct 6 15:49:29 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 77309
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
Cc: airlied@linux.ie, bskeggs@redhat.com, gnurou@gmail.com,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v5 2/3] drm/nouveau/fb/gf100: defer DMA mapping of scratch
 page to oneinit() hook
Date: Thu, 6 Oct 2016 16:49:29 +0100
Message-Id: <1475768970-32512-3-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The 100c10 scratch page is mapped using dma_map_page() before the TTM
layer has had a chance to set the DMA mask.
This means we are still running with the default mask of 32 bits when
this code executes, which causes problems on platforms with no memory
below 4 GB (such as AMD Seattle).

So move the dma_map_page() to the .oneinit hook, which executes after
the DMA mask has been set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c | 31 ++++++++++++--------
 1 file changed, 19 insertions(+), 12 deletions(-)

-- 
2.7.4

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
index 76433cc66fff..c1995c0024ef 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
@@ -50,24 +50,39 @@ gf100_fb_intr(struct nvkm_fb *base)
 }
 
 int
-gf100_fb_oneinit(struct nvkm_fb *fb)
+gf100_fb_oneinit(struct nvkm_fb *base)
 {
-	struct nvkm_device *device = fb->subdev.device;
+	struct gf100_fb *fb = gf100_fb(base);
+	struct nvkm_device *device = fb->base.subdev.device;
 	int ret, size = 0x1000;
 
 	size = nvkm_longopt(device->cfgopt, "MmuDebugBufferSize", size);
 	size = min(size, 0x1000);
 
 	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, size, 0x1000,
-			      false, &fb->mmu_rd);
+			      false, &base->mmu_rd);
 	if (ret)
 		return ret;
 
 	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, size, 0x1000,
-			      false, &fb->mmu_wr);
+			      false, &base->mmu_wr);
 	if (ret)
 		return ret;
 
+	fb->r100c10_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!fb->r100c10_page) {
+		nvkm_error(&fb->base.subdev, "failed 100c10 page alloc\n");
+		return -ENOMEM;
+	}
+
+	fb->r100c10 = dma_map_page(device->dev, fb->r100c10_page, 0, PAGE_SIZE,
+				   DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(device->dev, fb->r100c10)) {
+		nvkm_error(&fb->base.subdev, "failed to map 100c10 page\n");
+		__free_page(fb->r100c10_page);
+		return -EFAULT;
+	}
+
 	return 0;
 }
 
@@ -123,14 +138,6 @@ gf100_fb_new_(const struct nvkm_fb_func *func, struct nvkm_device *device,
 	nvkm_fb_ctor(func, device, index, &fb->base);
 	*pfb = &fb->base;
 
-	fb->r100c10_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (fb->r100c10_page) {
-		fb->r100c10 = dma_map_page(device->dev, fb->r100c10_page, 0,
-					   PAGE_SIZE, DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(device->dev, fb->r100c10))
-			return -EFAULT;
-	}
-
 	return 0;
 }