From patchwork Thu Sep 15 22:47:58 2011
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 4110
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: patches@linaro.org, Rob Clark
Subject: [PATCH] drm/gem: add functions to get/put pages
Date: Thu, 15 Sep 2011 17:47:58 -0500
Message-Id: <1316126878-29262-1-git-send-email-rob.clark@linaro.org>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1315855286-8182-5-git-send-email-rob.clark@linaro.org>
References: <1315855286-8182-5-git-send-email-rob.clark@linaro.org>

This factors out common code from psb_gtt_attach_pages()/
i915_gem_object_get_pages_gtt() and psb_gtt_detach_pages()/
i915_gem_object_put_pages_gtt().
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c |   87 +++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |    3 ++
 2 files changed, 90 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 396e60c..821ba8a 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -285,6 +285,93 @@ again:
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
 
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+	struct inode *inode;
+	struct address_space *mapping;
+	struct page *p, **pages;
+	int i, npages;
+
+	/* This is the shared memory object that backs the GEM resource */
+	inode = obj->filp->f_path.dentry->d_inode;
+	mapping = inode->i_mapping;
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = drm_malloc_ab(npages, sizeof(struct page *));
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	gfpmask |= mapping_gfp_mask(mapping);
+
+	for (i = 0; i < npages; i++) {
+		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+		if (IS_ERR(p))
+			goto fail;
+		pages[i] = p;
+
+		/* There is a hypothetical issue w/ drivers that require
+		 * buffer memory in the low 4GB.. if the pages are un-
+		 * pinned, and swapped out, they can end up swapped back
+		 * in above 4GB.  If pages are already in memory, then
+		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+		 * even if the already in-memory page disobeys the mask.
+		 *
+		 * It is only a theoretical issue today, because none of
+		 * the devices with this limitation can be populated with
+		 * enough memory to trigger the issue.  But this BUG_ON()
+		 * is here as a reminder in case the problem with
+		 * shmem_read_mapping_page_gfp() isn't solved by the time
+		 * it does become a real issue.
+		 *
+		 * See this thread: http://lkml.org/lkml/2011/7/11/238
+		 */
+		BUG_ON((gfpmask & __GFP_DMA32) &&
+				(page_to_pfn(p) >= 0x00100000UL));
+	}
+
+	return pages;
+
+fail:
+	while (i--) {
+		page_cache_release(pages[i]);
+	}
+	drm_free_large(pages);
+	return ERR_PTR(PTR_ERR(p));
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed)
+{
+	int i, npages;
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	for (i = 0; i < npages; i++) {
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		if (accessed)
+			mark_page_accessed(pages[i]);
+
+		/* Undo the reference we took when populating the table */
+		page_cache_release(pages[i]);
+	}
+
+	drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
 
 /**
  * drm_gem_free_mmap_offset - release a fake mmap offset for an object
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 43538b6..a62d8fe 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1624,6 +1624,9 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
 	drm_gem_object_unreference_unlocked(obj);
 }
 
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed);
 void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);