
[PATCHv3,06/30] drm/omap: Add support for render nodes

Message ID 1490706496-4959-7-git-send-email-tomi.valkeinen@ti.com
State Superseded

Commit Message

Tomi Valkeinen March 28, 2017, 1:07 p.m. UTC
From: Hemant Hariyani <hemanthariyani@ti.com>

Add support for render nodes in omap driver and allow required
ioctls to be accessible via render nodes.

This enables unprivileged clients to allocate resources like GEM buffers
for rendering their content into. Mode setting (KMS ioctls) is not
allowed using render nodes. These buffers are then shared with
a privileged process (e.g. a compositor) that has mode setting access.

An example of this use case is Android, where the hardware composer is
the only master and has mode setting access. Every other client then
uses a render node (e.g. /dev/dri/renderD128) to allocate and use its buffers.

Signed-off-by: Hemant Hariyani <hemanthariyani@ti.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/gpu/drm/omapdrm/omap_drv.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

Comments

Laurent Pinchart March 29, 2017, 8:22 a.m. UTC | #1
Hi Tomi,

Thank you for the patch.

On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
> From: Hemant Hariyani <hemanthariyani@ti.com>
> 
> Add support for render nodes in omap driver and allow required
> ioctls to be accessible via render nodes.

But the OMAP DSS doesn't perform rendering... This seems an abuse of render 
nodes, I think the API should instead be implemented by the GPU driver.

> This enables unprivileged clients to allocate resources like GEM buffers
> for rendering their content into. Mode setting (KMS ioctls) is not
> allowed using render nodes. These buffers are then shared with
> a privileged process (e.g. a compositor) that has mode setting access.
> 
> An example of this use case is Android, where the hardware composer is
> the only master and has mode setting access. Every other client then
> uses a render node (e.g. /dev/dri/renderD128) to allocate and use its buffers.
> 
> Signed-off-by: Hemant Hariyani <hemanthariyani@ti.com>
> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
> ---
>  drivers/gpu/drm/omapdrm/omap_drv.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
> index fe83efbbf127..ce0a1c04403d 100644
> --- a/drivers/gpu/drm/omapdrm/omap_drv.c
> +++ b/drivers/gpu/drm/omapdrm/omap_drv.c
> @@ -629,12 +629,18 @@ static int ioctl_gem_info(struct drm_device *dev, void *data,
>  }
> 
>  static const struct drm_ioctl_desc ioctls[DRM_COMMAND_END - DRM_COMMAND_BASE] = {
> -	DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param, DRM_AUTH),
> -	DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
> -	DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new, DRM_AUTH),
> -	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep, DRM_AUTH),
> -	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini, DRM_AUTH),
> -	DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info, DRM_AUTH),
> +	DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param,
> +			  DRM_AUTH | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param,
> +			  DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new,
> +			  DRM_AUTH | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep,
> +			  DRM_AUTH | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini,
> +			  DRM_AUTH | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info,
> +			  DRM_AUTH | DRM_RENDER_ALLOW),
>  };
> 
>  /*
> @@ -724,7 +730,7 @@ static const struct file_operations omapdriver_fops = {
> 
>  static struct drm_driver omap_drm_driver = {
>  	.driver_features = DRIVER_MODESET | DRIVER_GEM  | DRIVER_PRIME |
> -		DRIVER_ATOMIC,
> +		DRIVER_ATOMIC | DRIVER_RENDER,
>  	.open = dev_open,
>  	.lastclose = dev_lastclose,
>  #ifdef CONFIG_DEBUG_FS
Tomi Valkeinen March 29, 2017, 8:58 a.m. UTC | #2
On 29/03/17 11:22, Laurent Pinchart wrote:
> Hi Tomi,
> 
> Thank you for the patch.
> 
> On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
>> From: Hemant Hariyani <hemanthariyani@ti.com>
>>
>> Add support for render nodes in omap driver and allow required
>> ioctls to be accessible via render nodes.
> 
> But the OMAP DSS doesn't perform rendering... This seems an abuse of render
> nodes, I think the API should instead be implemented by the GPU driver.

I agree that the GPU use case described in the patch sounds a bit odd.
Why not allocate from the GPU driver instead. But here a particular
issue is that to get TILER buffers we need to ask them from the omapdrm.
Probably TILER should not be part of omapdrm, but then, where should it
be...

We also have writeback in DSS, which can function as a memory to memory
or capture device. That's not supported in the mainline, but we have
support in the TI kernel.

And how about a case where you have the privileged process as KMS
master, and another process wants to draw to a buffer with the CPU, and
then give the buffer to the privileged process for displaying.

So, yes, DSS is not a renderer (well, WB is kind of rendering), but
isn't it a valid use case to allocate a buffer from omapdrm?

Also, where should a buffer be allocated from, generally? I usually
think that if a buffer is going to DSS, I allocate it from omapdrm. Then
again, getting it from the source sounds ok also, i.e. from a v4l2
capture driver... But if we get it from the source, it may not be usable
by all the components in the processing pipeline (e.g, SGX requires 32
pixel aligned buffers).
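
The alignment constraint mentioned above can be sketched as a small helper. This is a hypothetical illustration of the arithmetic, not code from any of the drivers involved; the 32-pixel figure is taken from the statement above:

```c
#include <stdint.h>

/* Round a width in pixels up to the next multiple of 32 pixels and
 * return the resulting row pitch in bytes, given bytes per pixel.
 * An allocator serving several pipeline components would have to use
 * the strictest such constraint among them. */
static uint32_t aligned_pitch(uint32_t width_px, uint32_t bytes_per_px)
{
	uint32_t aligned_px = (width_px + 31) & ~31u;

	return aligned_px * bytes_per_px;
}
```

For example, a 1366-pixel-wide XRGB8888 buffer would be padded to 1376 pixels, giving a 5504-byte pitch, while 1280 pixels is already aligned.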

 Tomi
Laurent Pinchart March 29, 2017, 12:20 p.m. UTC | #3
Hi Tomi,

(CC'ing Daniel and David)

On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
> On 29/03/17 11:22, Laurent Pinchart wrote:
> > Hi Tomi,
> > 
> > Thank you for the patch.
> > 
> > On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
> >> From: Hemant Hariyani <hemanthariyani@ti.com>
> >> 
> >> Add support for render nodes in omap driver and allow required
> >> ioctls to be accessible via render nodes.
> > 
> > But the OMAP DSS doesn't perform rendering... This seems an abuse of
> > render nodes, I think the API should instead be implemented by the GPU
> > driver.
> 
> I agree that the GPU use case described in the patch sounds a bit odd.
> Why not allocate from the GPU driver instead. But here a particular
> issue is that to get TILER buffers we need to ask them from the omapdrm.
> Probably TILER should not be part of omapdrm, but then, where should it
> be...
> 
> We also have writeback in DSS, which can function as a memory to memory
> or capture device. That's not supported in the mainline, but we have
> support in the TI kernel.
> 
> And how about a case where you have the privileged process as KMS
> master, and another process wants to draw to a buffer with the CPU, and
> then give the buffer to the privileged process for displaying.
> 
> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
> isn't it a valid use case to allocate a buffer from omapdrm?

It could be a valid use case, but it's still an API abuse. It starts sounding 
like a DRM core issue to me. The DRIVER_RENDER flag is not documented, so its 
exact meaning isn't defined. I thought it was supposed to flag the device as a 
renderer (GPU).

commit 1793126fcebd7c18834f95d43b55e387a8803aa8
Author: David Herrmann <dh.herrmann@gmail.com>
Date:   Sun Aug 25 18:29:00 2013 +0200

    drm: implement experimental render nodes
    
    Render nodes provide an API for userspace to use non-privileged GPU
    commands without any running DRM-Master. It is useful for offscreen
    rendering, GPGPU clients, and normal render clients which do not perform
    modesetting.

    [...]

You instead use the flag as a way to enable unprivileged clients to allocate 
buffers. If that's what the flag should mean, I wonder if there would be a use 
case for *not* setting it.

> Also, where should a buffer be allocated from, generally? I usually
> think that if a buffer is going to DSS, I allocate it from omapdrm. Then
> again, getting it from the source sounds ok also, i.e. from a v4l2
> capture driver... But if we get it from the source, it may not be usable
> by all the components in the processing pipeline (e.g, SGX requires 32
> pixel aligned buffers).
David Herrmann March 29, 2017, 12:51 p.m. UTC | #4
Hi

On Wed, Mar 29, 2017 at 2:20 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi Tomi,
>
> (CC'ing Daniel and David)
>
> On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
>> On 29/03/17 11:22, Laurent Pinchart wrote:
>> > Hi Tomi,
>> >
>> > Thank you for the patch.
>> >
>> > On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
>> >> From: Hemant Hariyani <hemanthariyani@ti.com>
>> >>
>> >> Add support for render nodes in omap driver and allow required
>> >> ioctls to be accessible via render nodes.
>> >
>> > But the OMAP DSS doesn't perform rendering... This seems an abuse of
>> > render nodes, I think the API should instead be implemented by the GPU
>> > driver.
>>
>> I agree that the GPU use case described in the patch sounds a bit odd.
>> Why not allocate from the GPU driver instead. But here a particular
>> issue is that to get TILER buffers we need to ask them from the omapdrm.
>> Probably TILER should not be part of omapdrm, but then, where should it
>> be...
>>
>> We also have writeback in DSS, which can function as a memory to memory
>> or capture device. That's not supported in the mainline, but we have
>> support in the TI kernel.
>>
>> And how about a case where you have the privileged process as KMS
>> master, and another process wants to draw to a buffer with the CPU, and
>> then give the buffer to the privileged process for displaying.
>>
>> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
>> isn't it a valid use case to allocate a buffer from omapdrm?
>
> It could be a valid use case, but it's still an API abuse. It starts sounding
> like a DRM core issue to me. The DRIVER_RENDER flag is not documented, so its
> exact meaning isn't defined. I thought it was supposed to flag the device as a
> renderer (GPU).
>
> commit 1793126fcebd7c18834f95d43b55e387a8803aa8
> Author: David Herrmann <dh.herrmann@gmail.com>
> Date:   Sun Aug 25 18:29:00 2013 +0200
>
>     drm: implement experimental render nodes
>
>     Render nodes provide an API for userspace to use non-privileged GPU
>     commands without any running DRM-Master. It is useful for offscreen
>     rendering, GPGPU clients, and normal render clients which do not perform
>     modesetting.
>
>     [...]
>
> You instead use the flag as a way to enable unprivileged clients to allocate
> buffers. If that's what the flag should mean, I wonder if there would be a use
> case for *not* setting it.

Correct. You can (and should) enable all your sandboxed commands on
render nodes. That is, if a command only affects the issuing client,
then it is safe for render-nodes. If two clients have a file-context
opened on the render node, they should be unable to affect each other
(minus accounting, resource allocation, etc.).

The name is historic (I did not come up with it either, but failed at
renaming it..). The DRIVER_RENDER flag only controls whether the
render-node is actually instantiated. I will not object doing that
unconditionally. It will create some useless nodes for legacy drivers,
but we should not care.

Thanks
David
Laurent Pinchart March 29, 2017, 9:42 p.m. UTC | #5
Hi David,

On Wednesday 29 Mar 2017 14:51:48 David Herrmann wrote:
> On Wed, Mar 29, 2017 at 2:20 PM, Laurent Pinchart wrote:
> > On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
> >> On 29/03/17 11:22, Laurent Pinchart wrote:
> >>> On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
> >>>> From: Hemant Hariyani <hemanthariyani@ti.com>
> >>>> 
> >>>> Add support for render nodes in omap driver and allow required
> >>>> ioctls to be accessible via render nodes.
> >>> 
> >>> But the OMAP DSS doesn't perform rendering... This seems an abuse of
> >>> render nodes, I think the API should instead be implemented by the GPU
> >>> driver.
> >> 
> >> I agree that the GPU use case described in the patch sounds a bit odd.
> >> Why not allocate from the GPU driver instead. But here a particular
> >> issue is that to get TILER buffers we need to ask them from the omapdrm.
> >> Probably TILER should not be part of omapdrm, but then, where should it
> >> be...
> >> 
> >> We also have writeback in DSS, which can function as a memory to memory
> >> or capture device. That's not supported in the mainline, but we have
> >> support in the TI kernel.
> >> 
> >> And how about a case where you have the privileged process as KMS
> >> master, and another process wants to draw to a buffer with the CPU, and
> >> then give the buffer to the privileged process for displaying.
> >> 
> >> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
> >> isn't it a valid use case to allocate a buffer from omapdrm?
> > 
> > It could be a valid use case, but it's still an API abuse. It starts
> > sounding like a DRM core issue to me. The DRIVER_RENDER flag is not
> > documented, so its exact meaning isn't defined. I thought it was supposed
> > to flag the device as a renderer (GPU).
> > 
> > commit 1793126fcebd7c18834f95d43b55e387a8803aa8
> > Author: David Herrmann <dh.herrmann@gmail.com>
> > Date:   Sun Aug 25 18:29:00 2013 +0200
> > 
> >     drm: implement experimental render nodes
> >     
> >     Render nodes provide an API for userspace to use non-privileged GPU
> >     commands without any running DRM-Master. It is useful for offscreen
> >     rendering, GPGPU clients, and normal render clients which do not
> >     perform
> >     modesetting.
> >     
> >     [...]
> > 
> > You instead use the flag as a way to enable unprivileged clients to
> > allocate buffers. If that's what the flag should mean, I wonder if there
> > would be a use case for *not* setting it.
> 
> Correct. You can (and should) enable all your sandboxed commands on
> render nodes. That is, if a command only affects the issuing client,
> then it is safe for render-nodes. If two clients have a file-context
> opened on the render node, they should be unable to affect each other
> (minus accounting, resource allocation, etc.).
> 
> The name is historic (I did not come up with it either, but failed at
> renaming it..). The DRIVER_RENDER flag only controls whether the
> render-node is actually instantiated. I will not object doing that
> unconditionally. It will create some useless nodes for legacy drivers,
> but we should not care.

Couldn't we achieve the same effect without render nodes, by allowing GEM 
object allocation on the main DRM node by unauthenticated clients ?
David Herrmann March 30, 2017, 6:44 a.m. UTC | #6
Hey

On Wed, Mar 29, 2017 at 11:42 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi David,
>
> On Wednesday 29 Mar 2017 14:51:48 David Herrmann wrote:
>> On Wed, Mar 29, 2017 at 2:20 PM, Laurent Pinchart wrote:
>> > On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
>> >> On 29/03/17 11:22, Laurent Pinchart wrote:
>> >>> On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
>> >>>> From: Hemant Hariyani <hemanthariyani@ti.com>
>> >>>>
>> >>>> Add support for render nodes in omap driver and allow required
>> >>>> ioctls to be accessible via render nodes.
>> >>>
>> >>> But the OMAP DSS doesn't perform rendering... This seems an abuse of
>> >>> render nodes, I think the API should instead be implemented by the GPU
>> >>> driver.
>> >>
>> >> I agree that the GPU use case described in the patch sounds a bit odd.
>> >> Why not allocate from the GPU driver instead. But here a particular
>> >> issue is that to get TILER buffers we need to ask them from the omapdrm.
>> >> Probably TILER should not be part of omapdrm, but then, where should it
>> >> be...
>> >>
>> >> We also have writeback in DSS, which can function as a memory to memory
>> >> or capture device. That's not supported in the mainline, but we have
>> >> support in the TI kernel.
>> >>
>> >> And how about a case where you have the privileged process as KMS
>> >> master, and another process wants to draw to a buffer with the CPU, and
>> >> then give the buffer to the privileged process for displaying.
>> >>
>> >> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
>> >> isn't it a valid use case to allocate a buffer from omapdrm?
>> >
>> > It could be a valid use case, but it's still an API abuse. It starts
>> > sounding like a DRM core issue to me. The DRIVER_RENDER flag is not
>> > documented, so its exact meaning isn't defined. I thought it was supposed
>> > to flag the device as a renderer (GPU).
>> >
>> > commit 1793126fcebd7c18834f95d43b55e387a8803aa8
>> > Author: David Herrmann <dh.herrmann@gmail.com>
>> > Date:   Sun Aug 25 18:29:00 2013 +0200
>> >
>> >     drm: implement experimental render nodes
>> >
>> >     Render nodes provide an API for userspace to use non-privileged GPU
>> >     commands without any running DRM-Master. It is useful for offscreen
>> >     rendering, GPGPU clients, and normal render clients which do not
>> >     perform
>> >     modesetting.
>> >
>> >     [...]
>> >
>> > You instead use the flag as a way to enable unprivileged clients to
>> > allocate buffers. If that's what the flag should mean, I wonder if there
>> > would be a use case for *not* setting it.
>>
>> Correct. You can (and should) enable all your sandboxed commands on
>> render nodes. That is, if a command only affects the issuing client,
>> then it is safe for render-nodes. If two clients have a file-context
>> opened on the render node, they should be unable to affect each other
>> (minus accounting, resource allocation, etc.).
>>
>> The name is historic (I did not come up with it either, but failed at
>> renaming it..). The DRIVER_RENDER flag only controls whether the
>> render-node is actually instantiated. I will not object doing that
>> unconditionally. It will create some useless nodes for legacy drivers,
>> but we should not care.
>
> Couldn't we achieve the same effect without render nodes, by allowing GEM
> object allocation on the main DRM node by unauthenticated clients ?

Using a different inode makes sure you can sand-box the node. That is,
you can now easily mknod a render node in containers and sandboxes
without risk of exposing any other APIs.

I guess you could achieve something similar by careful selection of
which privs to pass to the container. But we preferred a clean cut
back then.

Thanks
David
Daniel Vetter March 30, 2017, 7:43 a.m. UTC | #7
On Thu, Mar 30, 2017 at 08:44:25AM +0200, David Herrmann wrote:
> Hey
> 
> On Wed, Mar 29, 2017 at 11:42 PM, Laurent Pinchart
> <laurent.pinchart@ideasonboard.com> wrote:
> > Hi David,
> >
> > On Wednesday 29 Mar 2017 14:51:48 David Herrmann wrote:
> >> On Wed, Mar 29, 2017 at 2:20 PM, Laurent Pinchart wrote:
> >> > On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
> >> >> On 29/03/17 11:22, Laurent Pinchart wrote:
> >> >>> On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
> >> >>>> From: Hemant Hariyani <hemanthariyani@ti.com>
> >> >>>>
> >> >>>> Add support for render nodes in omap driver and allow required
> >> >>>> ioctls to be accessible via render nodes.
> >> >>>
> >> >>> But the OMAP DSS doesn't perform rendering... This seems an abuse of
> >> >>> render nodes, I think the API should instead be implemented by the GPU
> >> >>> driver.
> >> >>
> >> >> I agree that the GPU use case described in the patch sounds a bit odd.
> >> >> Why not allocate from the GPU driver instead. But here a particular
> >> >> issue is that to get TILER buffers we need to ask them from the omapdrm.
> >> >> Probably TILER should not be part of omapdrm, but then, where should it
> >> >> be...
> >> >>
> >> >> We also have writeback in DSS, which can function as a memory to memory
> >> >> or capture device. That's not supported in the mainline, but we have
> >> >> support in the TI kernel.
> >> >>
> >> >> And how about a case where you have the privileged process as KMS
> >> >> master, and another process wants to draw to a buffer with the CPU, and
> >> >> then give the buffer to the privileged process for displaying.
> >> >>
> >> >> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
> >> >> isn't it a valid use case to allocate a buffer from omapdrm?
> >> >
> >> > It could be a valid use case, but it's still an API abuse. It starts
> >> > sounding like a DRM core issue to me. The DRIVER_RENDER flag is not
> >> > documented, so its exact meaning isn't defined. I thought it was supposed
> >> > to flag the device as a renderer (GPU).
> >> >
> >> > commit 1793126fcebd7c18834f95d43b55e387a8803aa8
> >> > Author: David Herrmann <dh.herrmann@gmail.com>
> >> > Date:   Sun Aug 25 18:29:00 2013 +0200
> >> >
> >> >     drm: implement experimental render nodes
> >> >
> >> >     Render nodes provide an API for userspace to use non-privileged GPU
> >> >     commands without any running DRM-Master. It is useful for offscreen
> >> >     rendering, GPGPU clients, and normal render clients which do not
> >> >     perform
> >> >     modesetting.
> >> >
> >> >     [...]
> >> >
> >> > You instead use the flag as a way to enable unprivileged clients to
> >> > allocate buffers. If that's what the flag should mean, I wonder if there
> >> > would be a use case for *not* setting it.
> >>
> >> Correct. You can (and should) enable all your sandboxed commands on
> >> render nodes. That is, if a command only affects the issuing client,
> >> then it is safe for render-nodes. If two clients have a file-context
> >> opened on the render node, they should be unable to affect each other
> >> (minus accounting, resource allocation, etc.).
> >>
> >> The name is historic (I did not come up with it either, but failed at
> >> renaming it..). The DRIVER_RENDER flag only controls whether the
> >> render-node is actually instantiated. I will not object doing that
> >> unconditionally. It will create some useless nodes for legacy drivers,
> >> but we should not care.
> >
> > Couldn't we achieve the same effect without render nodes, by allowing GEM
> > object allocation on the main DRM node by unauthenticated clients ?
> 
> Using a different inode makes sure you can sand-box the node. That is,
> you can now easily mknod a render node in containers and sandboxes
> without risk of exposing any other APIs.
> 
> I guess you could achieve something similar by careful selection of
> which privs to pass to the container. But we preferred a clean cut
> back then.

btw would be lovely if you folks could review the render node docs and
make sure they reflect all the nuances discussed here properly:

https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#render-nodes

And if not, please send a patch.

Thanks, Daniel

Patch

diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index fe83efbbf127..ce0a1c04403d 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -629,12 +629,18 @@  static int ioctl_gem_info(struct drm_device *dev, void *data,
 }
 
 static const struct drm_ioctl_desc ioctls[DRM_COMMAND_END - DRM_COMMAND_BASE] = {
-	DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param, DRM_AUTH),
-	DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-	DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new, DRM_AUTH),
-	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep, DRM_AUTH),
-	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini, DRM_AUTH),
-	DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info, DRM_AUTH),
+	DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param,
+			  DRM_AUTH | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param,
+			  DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new,
+			  DRM_AUTH | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep,
+			  DRM_AUTH | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini,
+			  DRM_AUTH | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info,
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 };
 
 /*
@@ -724,7 +730,7 @@  static const struct file_operations omapdriver_fops = {
 
 static struct drm_driver omap_drm_driver = {
 	.driver_features = DRIVER_MODESET | DRIVER_GEM  | DRIVER_PRIME |
-		DRIVER_ATOMIC,
+		DRIVER_ATOMIC | DRIVER_RENDER,
 	.open = dev_open,
 	.lastclose = dev_lastclose,
 #ifdef CONFIG_DEBUG_FS