From patchwork Mon Mar 5 16:48:40 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 7099
Sender: Rob Clark
From: Rob Clark
To: dri-devel@lists.freedesktop.org, linux-omap@vger.kernel.org
Cc: patches@linaro.org, Greg KH, Tomi Valkeinen, Andy Gross, Rob Clark
Subject: [PATCH 10/10] staging: drm/omap: mmap of tiled buffers with stride >4kb
Date: Mon, 5 Mar 2012 10:48:40 -0600
Message-Id: <1330966120-28582-11-git-send-email-rob.clark@linaro.org>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1330966120-28582-1-git-send-email-rob.clark@linaro.org>
References: <1330966120-28582-1-git-send-email-rob.clark@linaro.org>

From: Rob Clark

Deal with the case of buffers with virtual stride larger than one page
in fault_2d().

Signed-off-by: Rob Clark
---
 drivers/staging/omapdrm/omap_gem.c |   86 ++++++++++++++++++++++++-----------
 1 files changed, 59 insertions(+), 27 deletions(-)

diff --git a/drivers/staging/omapdrm/omap_gem.c b/drivers/staging/omapdrm/omap_gem.c
index 5abd294..921f058 100644
--- a/drivers/staging/omapdrm/omap_gem.c
+++ b/drivers/staging/omapdrm/omap_gem.c
@@ -153,10 +153,23 @@ static void evict_entry(struct drm_gem_object *obj,
 		enum tiler_fmt fmt, struct usergart_entry *entry)
 {
 	if (obj->dev->dev_mapping) {
-		size_t size = PAGE_SIZE * usergart[fmt].height;
+		struct omap_gem_object *omap_obj = to_omap_bo(obj);
+		int n = usergart[fmt].height;
+		size_t size = PAGE_SIZE * n;
 		loff_t off = mmap_offset(obj) +
 				(entry->obj_pgoff << PAGE_SHIFT);
-		unmap_mapping_range(obj->dev->dev_mapping, off, size, 1);
+		const int m = 1 + ((omap_obj->width << fmt) / PAGE_SIZE);
+		if (m > 1) {
+			int i;
+			/* if stride > PAGE_SIZE then sparse mapping: */
+			for (i = n; i > 0; i--) {
+				unmap_mapping_range(obj->dev->dev_mapping,
+						off, PAGE_SIZE, 1);
+				off += PAGE_SIZE * m;
+			}
+		} else {
+			unmap_mapping_range(obj->dev->dev_mapping, off, size, 1);
+		}
 	}
 
 	entry->obj = NULL;
@@ -342,26 +355,39 @@ static int fault_2d(struct drm_gem_object *obj,
 	void __user *vaddr;
 	int i, ret, slots;
 
-	if (!usergart)
-		return -EFAULT;
-
-	/* TODO: this fxn might need a bit tweaking to deal w/ tiled buffers
-	 * that are wider than 4kb
+	/*
+	 * Note the height of the slot is also equal to the number of pages
+	 * that need to be mapped in to fill a 4kb wide CPU page.  If the slot
+	 * height is 64, then 64 pages fill a 4kb wide by 64 row region.
+	 */
+	const int n = usergart[fmt].height;
+	const int n_shift = usergart[fmt].height_shift;
+
+	/*
+	 * If buffer width in bytes > PAGE_SIZE then the virtual stride is
+	 * rounded up to next multiple of PAGE_SIZE.. this needs to be taken
+	 * into account in some of the math, so figure out virtual stride
+	 * in pages
 	 */
+	const int m = 1 + ((omap_obj->width << fmt) / PAGE_SIZE);
 
 	/* We don't use vmf->pgoff since that has the fake offset: */
 	pgoff = ((unsigned long)vmf->virtual_address -
 			vma->vm_start) >> PAGE_SHIFT;
 
-	/* actual address we start mapping at is rounded down to previous slot
+	/*
+	 * Actual address we start mapping at is rounded down to previous slot
 	 * boundary in the y direction:
 	 */
-	base_pgoff = round_down(pgoff, usergart[fmt].height);
-	vaddr = vmf->virtual_address - ((pgoff - base_pgoff) << PAGE_SHIFT);
-	entry = &usergart[fmt].entry[usergart[fmt].last];
+	base_pgoff = round_down(pgoff, m << n_shift);
 
+	/* figure out buffer width in slots */
 	slots = omap_obj->width >> usergart[fmt].slot_shift;
 
+	vaddr = vmf->virtual_address - ((pgoff - base_pgoff) << PAGE_SHIFT);
+
+	entry = &usergart[fmt].entry[usergart[fmt].last];
+
 	/* evict previous buffer using this usergart entry, if any: */
 	if (entry->obj)
 		evict_entry(entry->obj, fmt, entry);
@@ -369,23 +395,30 @@ static int fault_2d(struct drm_gem_object *obj,
 	entry->obj = obj;
 	entry->obj_pgoff = base_pgoff;
 
-	/* now convert base_pgoff to phys offset from virt offset:
-	 */
-	base_pgoff = (base_pgoff >> usergart[fmt].height_shift) * slots;
-
-	/* map in pages.  Note the height of the slot is also equal to the
-	 * number of pages that need to be mapped in to fill 4kb wide CPU page.
-	 * If the height is 64, then 64 pages fill a 4kb wide by 64 row region.
-	 * Beyond the valid pixel part of the buffer, we set pages[i] to NULL to
-	 * get a dummy page mapped in.. if someone reads/writes it they will get
-	 * random/undefined content, but at least it won't be corrupting
-	 * whatever other random page used to be mapped in, or other undefined
-	 * behavior.
+	/* now convert base_pgoff to phys offset from virt offset: */
+	base_pgoff = (base_pgoff >> n_shift) * slots;
+
+	/* for wider-than 4k.. figure out which part of the slot-row we want: */
+	if (m > 1) {
+		int off = pgoff % m;
+		entry->obj_pgoff += off;
+		base_pgoff /= m;
+		slots = min(slots - (off << n_shift), n);
+		base_pgoff += off << n_shift;
+		vaddr += off << PAGE_SHIFT;
+	}
+
+	/*
+	 * Map in pages.  Beyond the valid pixel part of the buffer, we set
+	 * pages[i] to NULL to get a dummy page mapped in.. if someone
+	 * reads/writes it they will get random/undefined content, but at
+	 * least it won't be corrupting whatever other random page used to
+	 * be mapped in, or other undefined behavior.
 	 */
 	memcpy(pages, &omap_obj->pages[base_pgoff],
 			sizeof(struct page *) * slots);
 	memset(pages + slots, 0,
-			sizeof(struct page *) * (usergart[fmt].height - slots));
+			sizeof(struct page *) * (n - slots));
 
 	ret = tiler_pin(entry->block, pages, ARRAY_SIZE(pages), 0, true);
 	if (ret) {
@@ -393,16 +426,15 @@ static int fault_2d(struct drm_gem_object *obj,
 		return ret;
 	}
 
-	i = usergart[fmt].height;
 	pfn = entry->paddr >> PAGE_SHIFT;
 
 	VERB("Inserting %p pfn %lx, pa %lx", vmf->virtual_address,
 			pfn, pfn << PAGE_SHIFT);
 
-	while (i--) {
+	for (i = n; i > 0; i--) {
 		vm_insert_mixed(vma, (unsigned long)vaddr, pfn);
 		pfn += usergart[fmt].stride_pfn;
-		vaddr += PAGE_SIZE;
+		vaddr += PAGE_SIZE * m;
 	}
 
 	/* simple round-robin: */
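
For readers working through the slot math above, here is a minimal stand-alone
sketch (not part of the patch) that mirrors the arithmetic fault_2d() now does
for wide buffers: computing the virtual stride in pages (m), rounding the
faulting page offset down to a slot-row boundary, and walking one 4kb-wide
column of the slot row. The buffer width, format shift, slot height and
faulting page offset are made-up example values, and round_down_to() is only
a user-space stand-in for the kernel's round_down().

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* round x down to a multiple of align (stand-in for the kernel's round_down()) */
static unsigned long round_down_to(unsigned long x, unsigned long align)
{
	return x - (x % align);
}

int main(void)
{
	const unsigned long width = 3000;	/* pixels (hypothetical) */
	const int fmt = 1;			/* 16bpp: byte width = width << fmt */
	const int n_shift = 6;			/* slot height shift (hypothetical) */
	const int n = 1 << n_shift;		/* slot height: 64 rows/pages */

	/* virtual stride in pages, same formula as the patch */
	const unsigned long m = 1 + ((width << fmt) / PAGE_SIZE);

	/* pretend the CPU faulted on this page offset into the mapping */
	const unsigned long pgoff = 517;

	/* round down to the previous slot-row boundary (m * n pages per row) */
	const unsigned long base_pgoff = round_down_to(pgoff, m << n_shift);

	/* which 4kb-wide column of the slot row the fault landed in */
	const unsigned long off = pgoff % m;

	printf("m=%lu base_pgoff=%lu column=%lu\n", m, base_pgoff, off);

	/* the handler then inserts n pages, one per row, spaced m pages apart */
	for (int i = 0; i < n; i++) {
		unsigned long vpage = base_pgoff + off + (unsigned long)i * m;
		if (i < 3 || i == n - 1)
			printf("row %2d -> virtual page %lu\n", i, vpage);
	}

	return 0;
}

With these example numbers the byte width is 6000, so m = 2 and the faulting
offset 517 rounds down to 512; the handler maps 64 pages at virtual pages
513, 515, 517, ... 639, i.e. every other page of the slot row, which is the
sparse layout that evict_entry() also has to tear down page by page.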