| Message ID | 20240410130557.31572-22-tzimmermann@suse.de |
|---|---|
| State | New |
| Series | drm: Provide fbdev emulation per memory manager |
Thomas Zimmermann <tzimmermann@suse.de> writes:

> Add support for damage handling and deferred I/O to fbdev-dma. This
> enables fbdev-dma to support all DMA-memory-based DRM drivers, even
> those with a dirty callback in their framebuffers.
>
> The patch adds the code for deferred I/O and also sets a dedicated
> helper for struct fb_ops.fb_mmap that supports coherent mappings.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_fbdev_dma.c | 65 ++++++++++++++++++++++++++-------
>  1 file changed, 51 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
> index 6c9427bb4053b..8ffd072368bca 100644
> --- a/drivers/gpu/drm/drm_fbdev_dma.c
> +++ b/drivers/gpu/drm/drm_fbdev_dma.c
> @@ -4,6 +4,7 @@
>
>  #include <drm/drm_crtc_helper.h>
>  #include <drm/drm_drv.h>
> +#include <drm/drm_fb_dma_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_framebuffer.h>
>  #include <drm/drm_gem_dma_helper.h>
> @@ -35,6 +36,22 @@ static int drm_fbdev_dma_fb_release(struct fb_info *info, int user)
>  	return 0;
>  }
>
> +FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_dma,
> +				   drm_fb_helper_damage_range,
> +				   drm_fb_helper_damage_area);
> +

Shouldn't this be FB_GEN_DEFAULT_DEFERRED_DMAMEM_OPS() instead?

I know that right now the macros are the same, but I believe it was
added for a reason?
> +static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_framebuffer *fb = fb_helper->fb;
> +	struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0);
> +
> +	if (!dma->map_noncoherent)
> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

I noticed that some drivers do:

	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));

I see that vm_get_page_prot() is a per-architecture function, but I don't
know about the implications of getting the pgprot_t from the vma->vm_flags
set or just using the current vma->vm_page_prot value...

Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>

--
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat
Hi

Am 16.04.24 um 14:18 schrieb Javier Martinez Canillas:
> Thomas Zimmermann <tzimmermann@suse.de> writes:
>
>> Add support for damage handling and deferred I/O to fbdev-dma. This
>> enables fbdev-dma to support all DMA-memory-based DRM drivers, even
>> those with a dirty callback in their framebuffers.
>>
>> The patch adds the code for deferred I/O and also sets a dedicated
>> helper for struct fb_ops.fb_mmap that supports coherent mappings.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>  drivers/gpu/drm/drm_fbdev_dma.c | 65 ++++++++++++++++++++++++++-------
>>  1 file changed, 51 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
>> index 6c9427bb4053b..8ffd072368bca 100644
>> --- a/drivers/gpu/drm/drm_fbdev_dma.c
>> +++ b/drivers/gpu/drm/drm_fbdev_dma.c
>> @@ -4,6 +4,7 @@
>>
>>  #include <drm/drm_crtc_helper.h>
>>  #include <drm/drm_drv.h>
>> +#include <drm/drm_fb_dma_helper.h>
>>  #include <drm/drm_fb_helper.h>
>>  #include <drm/drm_framebuffer.h>
>>  #include <drm/drm_gem_dma_helper.h>
>> @@ -35,6 +36,22 @@ static int drm_fbdev_dma_fb_release(struct fb_info *info, int user)
>>  	return 0;
>>  }
>>
>> +FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_dma,
>> +				   drm_fb_helper_damage_range,
>> +				   drm_fb_helper_damage_area);
>> +
> Shouldn't this be FB_GEN_DEFAULT_DEFERRED_DMAMEM_OPS() instead?
>
> I know that right now the macros are the same, but I believe it was
> added for a reason?

Oh, thanks for noticing! I asked for that macro specifically for this
reason. It went through the omap tree and hadn't arrived in drm-misc-next
when I first made these patches. I'll update the patch accordingly.
>
>> +static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>> +{
>> +	struct drm_fb_helper *fb_helper = info->par;
>> +	struct drm_framebuffer *fb = fb_helper->fb;
>> +	struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0);
>> +
>> +	if (!dma->map_noncoherent)
>> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> I noticed that some drivers do:
>
> 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>
> I see that vm_get_page_prot() is a per-architecture function, but I don't
> know about the implications of getting the pgprot_t from the vma->vm_flags
> set or just using the current vma->vm_page_prot value...

That's an interesting observation. The code in the patch adds a WC flag
to the existing vm_page_prot. The code in your example first creates a
new vm_page_prot from the vm_flags field. Fbdev drivers generally use
the former approach. So where does the original vm_page_prot value come
from? (I think that's also the question behind your comment.)

I've looked through the kernel's mmap code from the syscall [1] to the
place where it invokes the mmap callback. [2] Shortly before doing so,
mmap_region() sets vm_page_prot from vm_flags, as in your example. [3]
I would assume there's no reason for drivers to call vm_get_page_prot()
by themselves. DRM drivers especially seem to be in the habit of doing so.

Best regards
Thomas

[1] https://elixir.bootlin.com/linux/v6.8/source/arch/x86/kernel/sys_x86_64.c#L86
[2] https://elixir.bootlin.com/linux/v6.8/source/mm/mmap.c#L2829
[3] https://elixir.bootlin.com/linux/v6.8/source/mm/mmap.c#L2824

>
> Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
>
> --
> Best regards,
>
> Javier Martinez Canillas
> Core Platforms
> Red Hat
>
Thomas Zimmermann <tzimmermann@suse.de> writes:

> Hi
>
> Am 16.04.24 um 14:18 schrieb Javier Martinez Canillas:
>> Thomas Zimmermann <tzimmermann@suse.de> writes:
>>
> [...]
>
>>> +static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>>> +{
>>> +	struct drm_fb_helper *fb_helper = info->par;
>>> +	struct drm_framebuffer *fb = fb_helper->fb;
>>> +	struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0);
>>> +
>>> +	if (!dma->map_noncoherent)
>>> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
>> I noticed that some drivers do:
>>
>> 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>>
>> I see that vm_get_page_prot() is a per-architecture function, but I don't
>> know about the implications of getting the pgprot_t from the vma->vm_flags
>> set or just using the current vma->vm_page_prot value...
>
> That's an interesting observation. The code in the patch adds a WC flag
> to the existing vm_page_prot. The code in your example first creates a
> new vm_page_prot from the vm_flags field. Fbdev drivers generally use
> the former approach. So where does the original vm_page_prot value come
> from? (I think that's also the question behind your comment.)
>

Yes, and also whether (and where) the vm_flags were set for this VMA.

> I've looked through the kernel's mmap code from the syscall [1] to the
> place where it invokes the mmap callback. [2] Shortly before doing so,
> mmap_region() sets vm_page_prot from vm_flags, as in your example. [3]
> I would assume there's no reason for drivers to call vm_get_page_prot()
> by themselves. DRM drivers especially seem to be in the habit of doing so.
>

Got it, makes sense. Thanks for taking a look.

> Best regards
> Thomas
>
diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
index 6c9427bb4053b..8ffd072368bca 100644
--- a/drivers/gpu/drm/drm_fbdev_dma.c
+++ b/drivers/gpu/drm/drm_fbdev_dma.c
@@ -4,6 +4,7 @@
 
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_drv.h>
+#include <drm/drm_fb_dma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_framebuffer.h>
 #include <drm/drm_gem_dma_helper.h>
@@ -35,6 +36,22 @@ static int drm_fbdev_dma_fb_release(struct fb_info *info, int user)
 	return 0;
 }
 
+FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_dma,
+				   drm_fb_helper_damage_range,
+				   drm_fb_helper_damage_area);
+
+static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_framebuffer *fb = fb_helper->fb;
+	struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0);
+
+	if (!dma->map_noncoherent)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+	return fb_deferred_io_mmap(info, vma);
+}
+
 static void drm_fbdev_dma_fb_destroy(struct fb_info *info)
 {
 	struct drm_fb_helper *fb_helper = info->par;
@@ -51,20 +68,13 @@ static void drm_fbdev_dma_fb_destroy(struct fb_info *info)
 	kfree(fb_helper);
 }
 
-static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
-{
-	struct drm_fb_helper *fb_helper = info->par;
-
-	return drm_gem_prime_mmap(fb_helper->buffer->gem, vma);
-}
-
 static const struct fb_ops drm_fbdev_dma_fb_ops = {
 	.owner = THIS_MODULE,
 	.fb_open = drm_fbdev_dma_fb_open,
 	.fb_release = drm_fbdev_dma_fb_release,
-	__FB_DEFAULT_DMAMEM_OPS_RDWR,
+	__FB_DEFAULT_DEFERRED_OPS_RDWR(drm_fbdev_dma),
 	DRM_FB_HELPER_DEFAULT_OPS,
-	__FB_DEFAULT_DMAMEM_OPS_DRAW,
+	__FB_DEFAULT_DEFERRED_OPS_DRAW(drm_fbdev_dma),
 	.fb_mmap = drm_fbdev_dma_fb_mmap,
 	.fb_destroy = drm_fbdev_dma_fb_destroy,
 };
@@ -98,10 +108,6 @@ static int drm_fbdev_dma_helper_fb_probe(struct drm_fb_helper *fb_helper,
 	dma_obj = to_drm_gem_dma_obj(buffer->gem);
 	fb = buffer->fb;
 
-	if (drm_WARN_ON(dev, fb->funcs->dirty)) {
-		ret = -ENODEV; /* damage handling not supported; use generic emulation */
-		goto err_drm_client_buffer_delete;
-	}
 
 	ret = drm_client_buffer_vmap(buffer, &map);
 	if (ret) {
@@ -112,7 +118,7 @@ static int drm_fbdev_dma_helper_fb_probe(struct drm_fb_helper *fb_helper,
 	}
 
 	fb_helper->buffer = buffer;
-	fb_helper->fb = buffer->fb;
+	fb_helper->fb = fb;
 
 	info = drm_fb_helper_alloc_info(fb_helper);
 	if (IS_ERR(info)) {
@@ -133,8 +139,19 @@ static int drm_fbdev_dma_helper_fb_probe(struct drm_fb_helper *fb_helper,
 	info->fix.smem_start = page_to_phys(virt_to_page(info->screen_buffer));
 	info->fix.smem_len = info->screen_size;
 
+	/* deferred I/O */
+	fb_helper->fbdefio.delay = HZ / 20;
+	fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
+
+	info->fbdefio = &fb_helper->fbdefio;
+	ret = fb_deferred_io_init(info);
+	if (ret)
+		goto err_drm_fb_helper_release_info;
+
 	return 0;
 
+err_drm_fb_helper_release_info:
+	drm_fb_helper_release_info(fb_helper);
 err_drm_client_buffer_vunmap:
 	fb_helper->fb = NULL;
 	fb_helper->buffer = NULL;
@@ -144,8 +161,28 @@ static int drm_fbdev_dma_helper_fb_probe(struct drm_fb_helper *fb_helper,
 	return ret;
 }
 
+static int drm_fbdev_dma_helper_fb_dirty(struct drm_fb_helper *helper,
+					 struct drm_clip_rect *clip)
+{
+	struct drm_device *dev = helper->dev;
+	int ret;
+
+	/* Call damage handlers only if necessary */
+	if (!(clip->x1 < clip->x2 && clip->y1 < clip->y2))
+		return 0;
+
+	if (helper->fb->funcs->dirty) {
+		ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, clip, 1);
+		if (drm_WARN_ONCE(dev, ret, "Dirty helper failed: ret=%d\n", ret))
+			return ret;
+	}
+
+	return 0;
+}
+
 static const struct drm_fb_helper_funcs drm_fbdev_dma_helper_funcs = {
 	.fb_probe = drm_fbdev_dma_helper_fb_probe,
+	.fb_dirty = drm_fbdev_dma_helper_fb_dirty,
 };
 
 /*
Add support for damage handling and deferred I/O to fbdev-dma. This
enables fbdev-dma to support all DMA-memory-based DRM drivers, even
those with a dirty callback in their framebuffers.

The patch adds the code for deferred I/O and also sets a dedicated
helper for struct fb_ops.fb_mmap that supports coherent mappings.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fbdev_dma.c | 65 ++++++++++++++++++++++++++-------
 1 file changed, 51 insertions(+), 14 deletions(-)