
[v5,00/15] follow_pfn and other iomap races

Message ID 20201030100815.2269-1-daniel.vetter@ffwll.ch


Daniel Vetter Oct. 30, 2020, 10:08 a.m. UTC
Hi all

Another update of my patch series to clamp down a bunch of races and gaps
around follow_pfn and other access to iomem mmaps. Previous version:

v1: https://lore.kernel.org/dri-devel/20201007164426.1812530-1-daniel.vetter@ffwll.ch/
v2: https://lore.kernel.org/dri-devel/20201009075934.3509076-1-daniel.vetter@ffwll.ch
v3: https://lore.kernel.org/dri-devel/20201021085655.1192025-1-daniel.vetter@ffwll.ch/
v4: https://lore.kernel.org/dri-devel/20201026105818.2585306-1-daniel.vetter@ffwll.ch/

And the discussion that sparked this journey:

https://lore.kernel.org/dri-devel/20201007164426.1812530-1-daniel.vetter@ffwll.ch/

Changes in v5:
- Tomasz found some issues in the media patches
- Polish suggested by Christoph for the unsafe_follow_pfn patch

Changes in v4:
- Drop the s390 patch, that was very stand-alone and now queued up to land
  through s390 trees.
- Comment polish per Dan's review.

Changes in v3:
- Bunch of polish all over, no functional changes aside from one barrier
  in the resource code, for consistency.
- A few more r-b tags.

Changes in v2:
- tons of small polish&fixes all over, thanks to all the reviewers who
  spotted issues
- I managed to test at least the generic_access_phys and pci mmap revoke
  stuff with a few gdb sessions using our i915 debug tools (hence now also
  the drm/i915 patch to properly request all the pci bar regions)
- reworked approach for the pci mmap revoke: Infrastructure moved into
  kernel/resource.c, address_space mapping is now set up at open time for
  everyone (which required some sysfs changes). Does indeed look a lot
  cleaner and a lot less invasive than I feared at first.

I feel like this is ready for some wider soaking. Since the remaining bits
are all kinda connected, it's probably simplest if it all goes through -mm.

Cheers, Daniel

Daniel Vetter (15):
  drm/exynos: Stop using frame_vector helpers
  drm/exynos: Use FOLL_LONGTERM for g2d cmdlists
  misc/habana: Stop using frame_vector helpers
  misc/habana: Use FOLL_LONGTERM for userptr
  mm/frame-vector: Use FOLL_LONGTERM
  media: videobuf2: Move frame_vector into media subsystem
  mm: Close race in generic_access_phys
  mm: Add unsafe_follow_pfn
  media/videbuf1|2: Mark follow_pfn usage as unsafe
  vfio/type1: Mark follow_pfn as unsafe
  PCI: Obey iomem restrictions for procfs mmap
  /dev/mem: Only set filp->f_mapping
  resource: Move devmem revoke code to resource framework
  sysfs: Support zapping of binary attr mmaps
  PCI: Revoke mappings like devmem

 drivers/char/mem.c                            |  86 +--------------
 drivers/gpu/drm/exynos/Kconfig                |   1 -
 drivers/gpu/drm/exynos/exynos_drm_g2d.c       |  48 ++++-----
 drivers/media/common/videobuf2/Kconfig        |   1 -
 drivers/media/common/videobuf2/Makefile       |   1 +
 .../media/common/videobuf2}/frame_vector.c    |  57 ++++------
 .../media/common/videobuf2/videobuf2-memops.c |   3 +-
 drivers/media/platform/omap/Kconfig           |   1 -
 drivers/media/v4l2-core/videobuf-dma-contig.c |   2 +-
 drivers/misc/habanalabs/Kconfig               |   1 -
 drivers/misc/habanalabs/common/habanalabs.h   |   6 +-
 drivers/misc/habanalabs/common/memory.c       |  50 ++++-----
 drivers/pci/pci-sysfs.c                       |   4 +
 drivers/pci/proc.c                            |   6 ++
 drivers/vfio/vfio_iommu_type1.c               |   4 +-
 fs/sysfs/file.c                               |  11 ++
 include/linux/ioport.h                        |   6 +-
 include/linux/mm.h                            |  47 +-------
 include/linux/sysfs.h                         |   2 +
 include/media/frame_vector.h                  |  47 ++++++++
 include/media/videobuf2-core.h                |   1 +
 kernel/resource.c                             | 101 +++++++++++++++++-
 mm/Kconfig                                    |   3 -
 mm/Makefile                                   |   1 -
 mm/memory.c                                   |  80 +++++++++++++-
 mm/nommu.c                                    |  27 ++++-
 security/Kconfig                              |  13 +++
 27 files changed, 360 insertions(+), 250 deletions(-)
 rename {mm => drivers/media/common/videobuf2}/frame_vector.c (84%)
 create mode 100644 include/media/frame_vector.h

Comments

Daniel Vetter Oct. 31, 2020, 2:45 p.m. UTC | #1
On Sat, Oct 31, 2020 at 3:55 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 10/30/20 3:08 AM, Daniel Vetter wrote:
> > This is used by media/videbuf2 for persistent dma mappings, not just
> > for a single dma operation and then freed again, so needs
> > FOLL_LONGTERM.
> >
> > Unfortunately current pup_locked doesn't support FOLL_LONGTERM due to
> > locking issues. Rework the code to pull the pup path out from the
> > mmap_sem critical section as suggested by Jason.
> >
> > By relying entirely on the vma checks in pin_user_pages and follow_pfn
>
> There are vma checks in pin_user_pages(), but this patch changes things
> to call pin_user_pages_fast(). And that does not have the vma checks.
> More below about this:
>
> > (for vm_flags and vma_is_fsdax) we can also streamline the code a lot.
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: Pawel Osciak <pawel@osciak.com>
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: Kyungmin Park <kyungmin.park@samsung.com>
> > Cc: Tomasz Figa <tfiga@chromium.org>
> > Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Jan Kara <jack@suse.cz>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: linux-mm@kvack.org
> > Cc: linux-arm-kernel@lists.infradead.org
> > Cc: linux-samsung-soc@vger.kernel.org
> > Cc: linux-media@vger.kernel.org
> > Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > --
> > v2: Streamline the code and further simplify the loop checks (Jason)
> >
> > v5: Review from Tomasz:
> > - fix page counting for the follow_pfn case by resetting ret
> > - drop gup_flags paramater, now unused
> > ---
> >   .../media/common/videobuf2/videobuf2-memops.c |  3 +-
> >   include/linux/mm.h                            |  2 +-
> >   mm/frame_vector.c                             | 53 ++++++-------------
> >   3 files changed, 19 insertions(+), 39 deletions(-)
> >
> > diff --git a/drivers/media/common/videobuf2/videobuf2-memops.c b/drivers/media/common/videobuf2/videobuf2-memops.c
> > index 6e9e05153f4e..9dd6c27162f4 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-memops.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-memops.c
> > @@ -40,7 +40,6 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> >       unsigned long first, last;
> >       unsigned long nr;
> >       struct frame_vector *vec;
> > -     unsigned int flags = FOLL_FORCE | FOLL_WRITE;
> >
> >       first = start >> PAGE_SHIFT;
> >       last = (start + length - 1) >> PAGE_SHIFT;
> > @@ -48,7 +47,7 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> >       vec = frame_vector_create(nr);
> >       if (!vec)
> >               return ERR_PTR(-ENOMEM);
> > -     ret = get_vaddr_frames(start & PAGE_MASK, nr, flags, vec);
> > +     ret = get_vaddr_frames(start & PAGE_MASK, nr, vec);
> >       if (ret < 0)
> >               goto out_destroy;
> >       /* We accept only complete set of PFNs */
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ef360fe70aaf..d6b8e30dce2e 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1765,7 +1765,7 @@ struct frame_vector {
> >   struct frame_vector *frame_vector_create(unsigned int nr_frames);
> >   void frame_vector_destroy(struct frame_vector *vec);
> >   int get_vaddr_frames(unsigned long start, unsigned int nr_pfns,
> > -                  unsigned int gup_flags, struct frame_vector *vec);
> > +                  struct frame_vector *vec);
> >   void put_vaddr_frames(struct frame_vector *vec);
> >   int frame_vector_to_pages(struct frame_vector *vec);
> >   void frame_vector_to_pfns(struct frame_vector *vec);
> > diff --git a/mm/frame_vector.c b/mm/frame_vector.c
> > index 10f82d5643b6..f8c34b895c76 100644
> > --- a/mm/frame_vector.c
> > +++ b/mm/frame_vector.c
> > @@ -32,13 +32,12 @@
> >    * This function takes care of grabbing mmap_lock as necessary.
> >    */
> >   int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> > -                  unsigned int gup_flags, struct frame_vector *vec)
> > +                  struct frame_vector *vec)
> >   {
> >       struct mm_struct *mm = current->mm;
> >       struct vm_area_struct *vma;
> >       int ret = 0;
> >       int err;
> > -     int locked;
> >
> >       if (nr_frames == 0)
> >               return 0;
> > @@ -48,40 +47,26 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> >
> >       start = untagged_addr(start);
> >
> > -     mmap_read_lock(mm);
> > -     locked = 1;
> > -     vma = find_vma_intersection(mm, start, start + 1);
> > -     if (!vma) {
> > -             ret = -EFAULT;
> > -             goto out;
> > -     }
> > -
> > -     /*
> > -      * While get_vaddr_frames() could be used for transient (kernel
> > -      * controlled lifetime) pinning of memory pages all current
> > -      * users establish long term (userspace controlled lifetime)
> > -      * page pinning. Treat get_vaddr_frames() like
> > -      * get_user_pages_longterm() and disallow it for filesystem-dax
> > -      * mappings.
> > -      */
> > -     if (vma_is_fsdax(vma)) {
> > -             ret = -EOPNOTSUPP;
> > -             goto out;
> > -     }
> > -
> > -     if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
>
> By removing this check from this location, and changing from
> pin_user_pages_locked() to pin_user_pages_fast(), I *think* we end up
> losing the check entirely. Is that intended? If so it could use a comment
> somewhere to explain why.

Yeah this wasn't intentional. I think I needed to drop the _locked
version to prep for FOLL_LONGTERM, and figured _fast is always better.
But I didn't realize that _fast doesn't have the vma checks; gup.c got
me a bit confused.

I'll remedy this in all the patches where this applies (because a
VM_IO | VM_PFNMAP mapping can point at struct-page-backed memory, and
that exact use case is what we want to stop with the unsafe_follow_pfn
work, since it wrecks things like CMA or security).

Aside: I do wonder whether the lack of that check isn't a problem.
VM_IO | VM_PFNMAP generally means driver managed, which means the
driver isn't going to consult the page pin count or anything like that
(at least not necessarily) when revoking or moving that memory, since
we're assuming it's totally under driver control. So if pup_fast can
get into such a mapping, we might have a problem.
-Daniel

> thanks,
> --
> John Hubbard
> NVIDIA
>
> > +     ret = pin_user_pages_fast(start, nr_frames,
> > +                               FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
> > +                               (struct page **)(vec->ptrs));
> > +     if (ret > 0) {
> >               vec->got_ref = true;
> >               vec->is_pfns = false;
> > -             ret = pin_user_pages_locked(start, nr_frames,
> > -                     gup_flags, (struct page **)(vec->ptrs), &locked);
> > -             goto out;
> > +             goto out_unlocked;
> >       }
> >
> > +     mmap_read_lock(mm);
> >       vec->got_ref = false;
> >       vec->is_pfns = true;
> > +     ret = 0;
> >       do {
> >               unsigned long *nums = frame_vector_pfns(vec);
> >
> > +             vma = find_vma_intersection(mm, start, start + 1);
> > +             if (!vma)
> > +                     break;
> > +
> >               while (ret < nr_frames && start + PAGE_SIZE <= vma->vm_end) {
> >                       err = follow_pfn(vma, start, &nums[ret]);
> >                       if (err) {
> > @@ -92,17 +77,13 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> >                       start += PAGE_SIZE;
> >                       ret++;
> >               }
> > -             /*
> > -              * We stop if we have enough pages or if VMA doesn't completely
> > -              * cover the tail page.
> > -              */
> > -             if (ret >= nr_frames || start < vma->vm_end)
> > +             /* Bail out if VMA doesn't completely cover the tail page. */
> > +             if (start < vma->vm_end)
> >                       break;
> > -             vma = find_vma_intersection(mm, start, start + 1);
> > -     } while (vma && vma->vm_flags & (VM_IO | VM_PFNMAP));
> > +     } while (ret < nr_frames);
> >   out:
> > -     if (locked)
> > -             mmap_read_unlock(mm);
> > +     mmap_read_unlock(mm);
> > +out_unlocked:
> >       if (!ret)
> >               ret = -EFAULT;
> >       if (ret > 0)
> >
>
>

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Tomasz Figa Nov. 2, 2020, 6:19 p.m. UTC | #2
On Fri, Oct 30, 2020 at 3:38 PM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>
> On Fri, Oct 30, 2020 at 3:11 PM Tomasz Figa <tfiga@chromium.org> wrote:
> >
> > On Fri, Oct 30, 2020 at 11:08 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> > >
> > > This is used by media/videbuf2 for persistent dma mappings, not just
> > > for a single dma operation and then freed again, so needs
> > > FOLL_LONGTERM.
> > >
> > > Unfortunately current pup_locked doesn't support FOLL_LONGTERM due to
> > > locking issues. Rework the code to pull the pup path out from the
> > > mmap_sem critical section as suggested by Jason.
> > >
> > > By relying entirely on the vma checks in pin_user_pages and follow_pfn
> > > (for vm_flags and vma_is_fsdax) we can also streamline the code a lot.
> > >
> > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > > Cc: Pawel Osciak <pawel@osciak.com>
> > > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > > Cc: Kyungmin Park <kyungmin.park@samsung.com>
> > > Cc: Tomasz Figa <tfiga@chromium.org>
> > > Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > Cc: Jérôme Glisse <jglisse@redhat.com>
> > > Cc: Jan Kara <jack@suse.cz>
> > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > Cc: linux-mm@kvack.org
> > > Cc: linux-arm-kernel@lists.infradead.org
> > > Cc: linux-samsung-soc@vger.kernel.org
> > > Cc: linux-media@vger.kernel.org
> > > Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > --
> > > v2: Streamline the code and further simplify the loop checks (Jason)
> > >
> > > v5: Review from Tomasz:
> > > - fix page counting for the follow_pfn case by resetting ret
> > > - drop gup_flags paramater, now unused
> > > ---
> > >  .../media/common/videobuf2/videobuf2-memops.c |  3 +-
> > >  include/linux/mm.h                            |  2 +-
> > >  mm/frame_vector.c                             | 53 ++++++-------------
> > >  3 files changed, 19 insertions(+), 39 deletions(-)
> > >
> >
> > Thanks, looks good to me now.
> >
> > Acked-by: Tomasz Figa <tfiga@chromium.org>
> >
> > From reading the code, this is quite unlikely to introduce any
> > behavior changes, but just to be safe, did you have a chance to test
> > this with some V4L2 driver?
>
> Nah, unfortunately not.

I believe we don't have any setup that could exercise the IO/PFNMAP
user pointers, but it should be possible to exercise the basic userptr
path by enabling the virtual (fake) video driver, vivid or
CONFIG_VIDEO_VIVID, in your kernel and then using yavta [1] with
--userptr and --capture=<number of frames> (and possibly some more
options) to grab a couple of frames from the test pattern generator.

Does it sound like something that you could give a try? Feel free to
ping me on IRC (tfiga on #v4l or #dri-devel) if you need any help.

[1] https://git.ideasonboard.org/yavta.git

Best regards,
Tomasz
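[Editorial note: a rough sketch of the smoke test Tomasz suggests. The device node, format, and frame count are examples, and the yavta flags are assumed from its usage text; it needs root and a kernel built with CONFIG_VIDEO_VIVID.]

```shell
# Load the virtual capture driver; it registers a fake /dev/videoN node.
modprobe vivid
# Find which node vivid registered (often /dev/video0 on a test box).
v4l2-ctl --list-devices
# Grab a few frames from the test pattern generator using USERPTR
# buffers, which exercises the get_vaddr_frames()/pin_user_pages path.
yavta --userptr --capture=8 -f UYVY -s 640x480 /dev/video0
```

If the capture completes without errors or warnings in dmesg, the basic userptr path still works after the rework; the IO/PFNMAP userptr case still has no easy software-only test setup.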

> -Daniel
>
> >
> > Best regards,
> > Tomasz
> >
> > > diff --git a/drivers/media/common/videobuf2/videobuf2-memops.c b/drivers/media/common/videobuf2/videobuf2-memops.c
> > > index 6e9e05153f4e..9dd6c27162f4 100644
> > > --- a/drivers/media/common/videobuf2/videobuf2-memops.c
> > > +++ b/drivers/media/common/videobuf2/videobuf2-memops.c
> > > @@ -40,7 +40,6 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> > >         unsigned long first, last;
> > >         unsigned long nr;
> > >         struct frame_vector *vec;
> > > -       unsigned int flags = FOLL_FORCE | FOLL_WRITE;
> > >
> > >         first = start >> PAGE_SHIFT;
> > >         last = (start + length - 1) >> PAGE_SHIFT;
> > > @@ -48,7 +47,7 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> > >         vec = frame_vector_create(nr);
> > >         if (!vec)
> > >                 return ERR_PTR(-ENOMEM);
> > > -       ret = get_vaddr_frames(start & PAGE_MASK, nr, flags, vec);
> > > +       ret = get_vaddr_frames(start & PAGE_MASK, nr, vec);
> > >         if (ret < 0)
> > >                 goto out_destroy;
> > >         /* We accept only complete set of PFNs */
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ef360fe70aaf..d6b8e30dce2e 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1765,7 +1765,7 @@ struct frame_vector {
> > >  struct frame_vector *frame_vector_create(unsigned int nr_frames);
> > >  void frame_vector_destroy(struct frame_vector *vec);
> > >  int get_vaddr_frames(unsigned long start, unsigned int nr_pfns,
> > > -                    unsigned int gup_flags, struct frame_vector *vec);
> > > +                    struct frame_vector *vec);
> > >  void put_vaddr_frames(struct frame_vector *vec);
> > >  int frame_vector_to_pages(struct frame_vector *vec);
> > >  void frame_vector_to_pfns(struct frame_vector *vec);
> > > diff --git a/mm/frame_vector.c b/mm/frame_vector.c
> > > index 10f82d5643b6..f8c34b895c76 100644
> > > --- a/mm/frame_vector.c
> > > +++ b/mm/frame_vector.c
> > > @@ -32,13 +32,12 @@
> > >   * This function takes care of grabbing mmap_lock as necessary.
> > >   */
> > >  int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> > > -                    unsigned int gup_flags, struct frame_vector *vec)
> > > +                    struct frame_vector *vec)
> > >  {
> > >         struct mm_struct *mm = current->mm;
> > >         struct vm_area_struct *vma;
> > >         int ret = 0;
> > >         int err;
> > > -       int locked;
> > >
> > >         if (nr_frames == 0)
> > >                 return 0;
> > > @@ -48,40 +47,26 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> > >
> > >         start = untagged_addr(start);
> > >
> > > -       mmap_read_lock(mm);
> > > -       locked = 1;
> > > -       vma = find_vma_intersection(mm, start, start + 1);
> > > -       if (!vma) {
> > > -               ret = -EFAULT;
> > > -               goto out;
> > > -       }
> > > -
> > > -       /*
> > > -        * While get_vaddr_frames() could be used for transient (kernel
> > > -        * controlled lifetime) pinning of memory pages all current
> > > -        * users establish long term (userspace controlled lifetime)
> > > -        * page pinning. Treat get_vaddr_frames() like
> > > -        * get_user_pages_longterm() and disallow it for filesystem-dax
> > > -        * mappings.
> > > -        */
> > > -       if (vma_is_fsdax(vma)) {
> > > -               ret = -EOPNOTSUPP;
> > > -               goto out;
> > > -       }
> > > -
> > > -       if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
> > > +       ret = pin_user_pages_fast(start, nr_frames,
> > > +                                 FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
> > > +                                 (struct page **)(vec->ptrs));
> > > +       if (ret > 0) {
> > >                 vec->got_ref = true;
> > >                 vec->is_pfns = false;
> > > -               ret = pin_user_pages_locked(start, nr_frames,
> > > -                       gup_flags, (struct page **)(vec->ptrs), &locked);
> > > -               goto out;
> > > +               goto out_unlocked;
> > >         }
> > >
> > > +       mmap_read_lock(mm);
> > >         vec->got_ref = false;
> > >         vec->is_pfns = true;
> > > +       ret = 0;
> > >         do {
> > >                 unsigned long *nums = frame_vector_pfns(vec);
> > >
> > > +               vma = find_vma_intersection(mm, start, start + 1);
> > > +               if (!vma)
> > > +                       break;
> > > +
> > >                 while (ret < nr_frames && start + PAGE_SIZE <= vma->vm_end) {
> > >                         err = follow_pfn(vma, start, &nums[ret]);
> > >                         if (err) {
> > > @@ -92,17 +77,13 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> > >                         start += PAGE_SIZE;
> > >                         ret++;
> > >                 }
> > > -               /*
> > > -                * We stop if we have enough pages or if VMA doesn't completely
> > > -                * cover the tail page.
> > > -                */
> > > -               if (ret >= nr_frames || start < vma->vm_end)
> > > +               /* Bail out if VMA doesn't completely cover the tail page. */
> > > +               if (start < vma->vm_end)
> > >                         break;
> > > -               vma = find_vma_intersection(mm, start, start + 1);
> > > -       } while (vma && vma->vm_flags & (VM_IO | VM_PFNMAP));
> > > +       } while (ret < nr_frames);
> > >  out:
> > > -       if (locked)
> > > -               mmap_read_unlock(mm);
> > > +       mmap_read_unlock(mm);
> > > +out_unlocked:
> > >         if (!ret)
> > >                 ret = -EFAULT;
> > >         if (ret > 0)
> > > --
> > > 2.28.0
> > >
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Jason Gunthorpe Nov. 4, 2020, 2 p.m. UTC | #3
On Sun, Nov 01, 2020 at 11:50:39PM +0100, Daniel Vetter wrote:

> It's not device drivers, but everyone else. At least my understanding
> is that VM_IO | VM_PFNMAP means "even if it happens to be backed by a
> struct page, do not treat it like normal memory". And gup/pup_fast
> happily break that. I tried to chase the history of that test, didn't
> turn up anything I understood much:

VM_IO isn't supposed to have struct pages, so how can gup_fast return
them?

I thought some magic in the PTE flags excluded this?

Jason
Christoph Hellwig Nov. 4, 2020, 4:21 p.m. UTC | #4
On Wed, Nov 04, 2020 at 04:54:19PM +0100, Daniel Vetter wrote:
> I don't really have a box here, but dma_mmap_attrs() and friends to
> mmap dma_alloc_coherent memory is set up as VM_IO | VM_PFNMAP (it's
> actually enforced since underneath it uses remap_pfn_range), and
> usually (except if it's pre-cma carveout) that's just normal struct
> page backed memory. Sometimes from a cma region (so will be caught by
> the cma page check), but if you have an iommu to make it
> device-contiguous, that's not needed.

dma_mmap_* memory may or may not be page backed, but it absolutely
must not be resolved by get_user_pages and friends as it is special.
So yes, not being able to get a struct page back from such an mmap is
a feature.
Daniel Vetter Nov. 4, 2020, 4:26 p.m. UTC | #5
On Wed, Nov 4, 2020 at 5:21 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Wed, Nov 04, 2020 at 04:54:19PM +0100, Daniel Vetter wrote:
> > I don't really have a box here, but dma_mmap_attrs() and friends to
> > mmap dma_alloc_coherent memory is set up as VM_IO | VM_PFNMAP (it's
> > actually enforced since underneath it uses remap_pfn_range), and
> > usually (except if it's pre-cma carveout) that's just normal struct
> > page backed memory. Sometimes from a cma region (so will be caught by
> > the cma page check), but if you have an iommu to make it
> > device-contiguous, that's not needed.
>
> dma_mmap_* memory may or may not be page backed, but it absolutely
> must not be resolved by get_user_pages and friends as it is special.
> So yes, not being able to get a struct page back from such an mmap is
> a feature.

Yes, that's clear.

What we're discussing is whether gup_fast and pup_fast also obey this,
or fall over and can give you the struct page that's backing the
dma_mmap_* memory. Since the _fast variant doesn't check for
vma->vm_flags, and afaict that's the only thing which closes this gap.
And like you restate, that would be a bit a problem. So where's that
check which Jason&me aren't spotting?
-Daniel
Christoph Hellwig Nov. 4, 2020, 4:37 p.m. UTC | #6
On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
> What we're discussing is whether gup_fast and pup_fast also obey this,
> or fall over and can give you the struct page that's backing the
> dma_mmap_* memory. Since the _fast variant doesn't check for
> vma->vm_flags, and afaict that's the only thing which closes this gap.
> And like you restate, that would be a bit a problem. So where's that
> check which Jason&me aren't spotting?

remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
errors out on pte_special.  Of course this only works for the
CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
a real problem.
Christoph Hellwig Nov. 4, 2020, 4:41 p.m. UTC | #7
On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
> On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
> > What we're discussing is whether gup_fast and pup_fast also obey this,
> > or fall over and can give you the struct page that's backing the
> > dma_mmap_* memory. Since the _fast variant doesn't check for
> > vma->vm_flags, and afaict that's the only thing which closes this gap.
> > And like you restate, that would be a bit a problem. So where's that
> > check which Jason&me aren't spotting?
>
> remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
> errors out on pte_special.  Of course this only works for the
> CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
> a real problem.

Except that we don't really support pte-level gup-fast without
CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
John Hubbard Nov. 4, 2020, 6:44 p.m. UTC | #8
On 11/4/20 10:17 AM, Jason Gunthorpe wrote:
> On Wed, Nov 04, 2020 at 04:41:19PM +0000, Christoph Hellwig wrote:
>> On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
>>> On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
>>>> What we're discussing is whether gup_fast and pup_fast also obey this,
>>>> or fall over and can give you the struct page that's backing the
>>>> dma_mmap_* memory. Since the _fast variant doesn't check for
>>>> vma->vm_flags, and afaict that's the only thing which closes this gap.
>>>> And like you restate, that would be a bit a problem. So where's that
>>>> check which Jason&me aren't spotting?
>>>
>>> remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
>>> errors out on pte_special.  Of course this only works for the
>>> CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
>>> a real problem.
>>
>> Except that we don't really support pte-level gup-fast without
>> CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
>> HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
>
> Mm, I thought it was probably the special flag..
>
> Knowing that CONFIG_HAVE_FAST_GUP can't be set without
> CONFIG_ARCH_HAS_PTE_SPECIAL is pretty insightful, can we put that in
> the Kconfig?
>
> config HAVE_FAST_GUP
>          depends on MMU
>          depends on ARCH_HAS_PTE_SPECIAL
>          bool
>

Well, the !CONFIG_ARCH_HAS_PTE_SPECIAL case points out in a comment that
gup-fast is not *completely* unavailable there, so I don't think you want
to shut it off like that:

/*
  * If we can't determine whether or not a pte is special, then fail immediately
  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
  * to be special.
  *
  * For a futex to be placed on a THP tail page, get_futex_key requires a
  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
  * useful to have gup_huge_pmd even if we can't operate on ptes.
  */


thanks,
-- 
John Hubbard
NVIDIA
Jason Gunthorpe Nov. 4, 2020, 7:02 p.m. UTC | #9
On Wed, Nov 04, 2020 at 10:44:56AM -0800, John Hubbard wrote:
> On 11/4/20 10:17 AM, Jason Gunthorpe wrote:
> > On Wed, Nov 04, 2020 at 04:41:19PM +0000, Christoph Hellwig wrote:
> > > On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
> > > > On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
> > > > > What we're discussing is whether gup_fast and pup_fast also obey this,
> > > > > or fall over and can give you the struct page that's backing the
> > > > > dma_mmap_* memory. Since the _fast variant doesn't check for
> > > > > vma->vm_flags, and afaict that's the only thing which closes this gap.
> > > > > And like you restate, that would be a bit a problem. So where's that
> > > > > check which Jason&me aren't spotting?
> > > >
> > > > remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
> > > > errors out on pte_special.  Of course this only works for the
> > > > CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
> > > > a real problem.
> > >
> > > Except that we don't really support pte-level gup-fast without
> > > CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
> > > HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
> >
> > Mm, I thought it was probably the special flag..
> >
> > Knowing that CONFIG_HAVE_FAST_GUP can't be set without
> > CONFIG_ARCH_HAS_PTE_SPECIAL is pretty insightful, can we put that in
> > the Kconfig?
> >
> > config HAVE_FAST_GUP
> >          depends on MMU
> >          depends on ARCH_HAS_PTE_SPECIAL
> >          bool
> >
> Well, the !CONFIG_ARCH_HAS_PTE_SPECIAL case points out in a comment that
> gup-fast is not *completely* unavailable there, so I don't think you want
> to shut it off like that:
>
> /*
>  * If we can't determine whether or not a pte is special, then fail immediately
>  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
>  * to be special.
>  *
>  * For a futex to be placed on a THP tail page, get_futex_key requires a
>  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>  * useful to have gup_huge_pmd even if we can't operate on ptes.
>  */

I saw that once and I really couldn't make sense of it..
What use is having futex's that only work on THP pages? Confused

CH said there was no case of HAVE_FAST_GUP !ARCH_HAS_PTE_SPECIAL, is
one hidden someplace then?

Jason
Jason Gunthorpe Nov. 5, 2020, 12:49 p.m. UTC | #10
On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > /*
> >  * If we can't determine whether or not a pte is special, then fail immediately
> >  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> >  * to be special.
> >  *
> >  * For a futex to be placed on a THP tail page, get_futex_key requires a
> >  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> >  * useful to have gup_huge_pmd even if we can't operate on ptes.
> >  */
> 
> We support hugepage faults in gpu drivers since recently, and I'm not
> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> just me missing something again.

It means ioremap can't create an IO page PUD, it has to be broken up.

Does ioremap even create anything larger than PTEs?

Jason
John Hubbard Nov. 6, 2020, 4:08 a.m. UTC | #11
On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
>>> /*
>>>   * If we can't determine whether or not a pte is special, then fail immediately
>>>   * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
>>>   * to be special.
>>>   *
>>>   * For a futex to be placed on a THP tail page, get_futex_key requires a
>>>   * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>>>   * useful to have gup_huge_pmd even if we can't operate on ptes.
>>>   */
>>
>> We support hugepage faults in gpu drivers since recently, and I'm not
>> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
>> just me missing something again.
> 
> It means ioremap can't create an IO page PUD, it has to be broken up.
> 
> Does ioremap even create anything larger than PTEs?
> 

From my reading, yes. See ioremap_try_huge_pmd().

thanks,
Daniel Vetter Nov. 6, 2020, 10:27 a.m. UTC | #12
On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Nov 6, 2020 at 5:08 AM John Hubbard <jhubbard@nvidia.com> wrote:
> >
> > On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> > > On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > >>> /*
> > >>>   * If we can't determine whether or not a pte is special, then fail immediately
> > >>>   * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> > >>>   * to be special.
> > >>>   *
> > >>>   * For a futex to be placed on a THP tail page, get_futex_key requires a
> > >>>   * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> > >>>   * useful to have gup_huge_pmd even if we can't operate on ptes.
> > >>>   */
> > >>
> > >> We support hugepage faults in gpu drivers since recently, and I'm not
> > >> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> > >> just me missing something again.
> > >
> > > It means ioremap can't create an IO page PUD, it has to be broken up.
> > >
> > > Does ioremap even create anything larger than PTEs?
>
> gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> on-demand paging and move buffers around. From what I glanced, for the
> lowest level we do the pte_mkspecial correctly (I think I convinced
> myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> just yolo.

So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
specially, but frankly I got totally lost in what this does.

The comment above the pfn_t_devmap check makes me wonder whether doing
this is correct or not.

Also adding Thomas Hellstrom, who implemented the huge map support in ttm.
-Daniel

> remap_pfn_range seems to indeed split down to pte level always.
>
> > From my reading, yes. See ioremap_try_huge_pmd().
>
> The ioremap here shouldn't matter, since this is for kernel-internal
> mappings. So that's all fine I think.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Jason Gunthorpe Nov. 6, 2020, 12:55 p.m. UTC | #13
On Fri, Nov 06, 2020 at 11:27:59AM +0100, Daniel Vetter wrote:
> On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch> wrote:

> > 
> > On Fri, Nov 6, 2020 at 5:08 AM John Hubbard <jhubbard@nvidia.com> wrote:
> > >
> > > On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> > > > On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > > >>> /*
> > > >>>   * If we can't determine whether or not a pte is special, then fail immediately
> > > >>>   * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> > > >>>   * to be special.
> > > >>>   *
> > > >>>   * For a futex to be placed on a THP tail page, get_futex_key requires a
> > > >>>   * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> > > >>>   * useful to have gup_huge_pmd even if we can't operate on ptes.
> > > >>>   */
> > > >>
> > > >> We support hugepage faults in gpu drivers since recently, and I'm not
> > > >> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> > > >> just me missing something again.
> > > >
> > > > It means ioremap can't create an IO page PUD, it has to be broken up.
> > > >
> > > > Does ioremap even create anything larger than PTEs?
> > 
> > gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> > on-demand paging and move buffers around. From what I glanced, for the
> > lowest level we do the pte_mkspecial correctly (I think I convinced
> > myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> > just yolo.
> 
> So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
> the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
> x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
> specially, but frankly I got totally lost in what this does.

The fact vmf_insert_pfn_pmd_prot() has all those BUG_ON's to prevent
putting VM_PFNMAP pages into the page tables seems like a big red
flag.

The comment seems to confirm what we are talking about here:

	/*
	 * If we had pmd_special, we could avoid all these restrictions,
	 * but we need to be consistent with PTEs and architectures that
	 * can't support a 'special' bit.
	 */

ie without the ability to mark special we can't block fast gup and
anyone who does O_DIRECT on these ranges will crash the kernel when it
tries to convert an IO page into a struct page.

Should be easy enough to directly test?

Putting non-struct page PTEs into a VMA without setting VM_PFNMAP just
seems horribly wrong to me.

Jason
Jason Gunthorpe Nov. 6, 2020, 12:58 p.m. UTC | #14
On Fri, Nov 06, 2020 at 11:01:57AM +0100, Daniel Vetter wrote:

> gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> on-demand paging and move buffers around. From what I glanced, for the
> lowest level we do the pte_mkspecial correctly (I think I convinced
> myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> just yolo.
> 
> remap_pfn_range seems to indeed split down to pte level always.

That's what it looked like to me too.
 
> > From my reading, yes. See ioremap_try_huge_pmd().
> 
> The ioremap here shouldn't matter, since this is for kernel-internal
> mappings. So that's all fine I think.

Right, sorry to be unclear, we are talking about io_remap_pfn_range(),
which is for userspace mappings in VMAs.

Jason
Thomas Hellström Nov. 9, 2020, 8:44 a.m. UTC | #15
On Fri, 2020-11-06 at 08:55 -0400, Jason Gunthorpe wrote:
> On Fri, Nov 06, 2020 at 11:27:59AM +0100, Daniel Vetter wrote:
> > On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch>
> > wrote:
> > > On Fri, Nov 6, 2020 at 5:08 AM John Hubbard <jhubbard@nvidia.com>
> > > wrote:
> > > > On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> > > > > On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter
> > > > > wrote:
> > > > > > > /*
> > > > > > >  * If we can't determine whether or not a pte is special, then fail immediately
> > > > > > >  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> > > > > > >  * to be special.
> > > > > > >  *
> > > > > > >  * For a futex to be placed on a THP tail page, get_futex_key requires a
> > > > > > >  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> > > > > > >  * useful to have gup_huge_pmd even if we can't operate on ptes.
> > > > > > >  */
> > > > > > 
> > > > > > We support hugepage faults in gpu drivers since recently, and I'm not
> > > > > > seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> > > > > > just me missing something again.
> > > > > 
> > > > > It means ioremap can't create an IO page PUD, it has to be broken up.
> > > > > 
> > > > > Does ioremap even create anything larger than PTEs?
> > > 
> > > gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> > > on-demand paging and move buffers around. From what I glanced, for the
> > > lowest level we do the pte_mkspecial correctly (I think I convinced
> > > myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> > > just yolo.
> > 
> > So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
> > the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
> > x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
> > specially, but frankly I got totally lost in what this does.
> 
> The fact vmf_insert_pfn_pmd_prot() has all those BUG_ON's to prevent
> putting VM_PFNMAP pages into the page tables seems like a big red
> flag.
> 
> The comment seems to confirm what we are talking about here:
> 
> 	/*
> 	 * If we had pmd_special, we could avoid all these restrictions,
> 	 * but we need to be consistent with PTEs and architectures that
> 	 * can't support a 'special' bit.
> 	 */
> 
> ie without the ability to mark special we can't block fast gup and
> anyone who does O_DIRECT on these ranges will crash the kernel when it
> tries to convert an IO page into a struct page.
> 
> Should be easy enough to directly test?
> 
> Putting non-struct page PTEs into a VMA without setting VM_PFNMAP just
> seems horribly wrong to me.

Although core mm special huge-page support is currently quite limited,
some time ago, I extended the pre-existing vma_is_dax() to
vma_is_special_huge():

/**
 * vma_is_special_huge - Are transhuge page-table entries considered special?
 * @vma: Pointer to the struct vm_area_struct to consider
 *
 * Whether transhuge page-table entries are considered "special" following
 * the definition in vm_normal_page().
 *
 * Return: true if transhuge page-table entries should be considered special,
 * false otherwise.
 */
static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
{
	return vma_is_dax(vma) || (vma->vm_file &&
				   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
}

meaning that currently all transhuge page-table-entries in a PFNMAP or
MIXEDMAP vma are considered "special". The number of calls to this
function (mainly in the page-splitting code) is quite limited so
replacing it with a more elaborate per-page-table-entry scheme would, I
guess, definitely be possible. Although all functions using it would
need a fallback path for architectures not supporting it.

/Thomas



> 
> Jason