
[RFC] mm, oom_reaper: gather each vma to prevent leaking TLB entry

Message ID 20171106033651.172368-1-wangnan0@huawei.com
State Superseded
Series [RFC] mm, oom_reaper: gather each vma to prevent leaking TLB entry

Commit Message

Wang Nan Nov. 6, 2017, 3:36 a.m. UTC
tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
space. In this case, tlb->fullmm is true. Some archs like arm64 don't
flush the TLB when tlb->fullmm is true:

  commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").

This leaks TLB entries: for example, when oom_reaper selects a task and
reaps its virtual memory space, another thread in the same task group may
still be running on another core and can access the already-freed memory
through stale TLB entries.

This patch gathers each vma instead of the full vm space, so tlb->fullmm
is not true. The behavior of the oom reaper becomes similar to munmapping
before do_exit, which should be safe for all archs.

Signed-off-by: Wang Nan <wangnan0@huawei.com>

Cc: Bob Liu <liubo95@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Andrea Arcangeli <aarcange@redhat.com>
---
 mm/oom_kill.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

-- 
2.10.1

Comments

Bob Liu Nov. 6, 2017, 7:04 a.m. UTC | #1
On Mon, Nov 6, 2017 at 11:36 AM, Wang Nan <wangnan0@huawei.com> wrote:
> tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> flush the TLB when tlb->fullmm is true:
>
>   commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").

CC'ed Will Deacon.

> This leaks TLB entries: for example, when oom_reaper selects a task and
> reaps its virtual memory space, another thread in the same task group may
> still be running on another core and can access the already-freed memory
> through stale TLB entries.
>
> This patch gathers each vma instead of the full vm space, so tlb->fullmm
> is not true. The behavior of the oom reaper becomes similar to munmapping
> before do_exit, which should be safe for all archs.
Michal Hocko Nov. 6, 2017, 8:52 a.m. UTC | #2
On Mon 06-11-17 15:04:40, Bob Liu wrote:
> On Mon, Nov 6, 2017 at 11:36 AM, Wang Nan <wangnan0@huawei.com> wrote:
> > tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> > space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> > flush the TLB when tlb->fullmm is true:
> >
> >   commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").
>
> CC'ed Will Deacon.
>
> > This leaks TLB entries: for example, when oom_reaper selects a task and
> > reaps its virtual memory space, another thread in the same task group may
> > still be running on another core and can access the already-freed memory
> > through stale TLB entries.

No threads should be running in userspace by the time the reaper gets to
unmap their address space. So the only potential case is that they are
accessing the user memory from the kernel, when we should fault, and we
have MMF_UNSTABLE to cause a SIGBUS. So is the race you are describing
real?

> > This patch gathers each vma instead of the full vm space, so tlb->fullmm
> > is not true. The behavior of the oom reaper becomes similar to munmapping
> > before do_exit, which should be safe for all archs.

I do not have any objections to per-vma TLB flushing, because it would
free gathered pages sooner, but I am not sure I see any real problem
here. Have you seen any real issues, or is this more of a review-driven
fix?

> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

-- 
Michal Hocko
SUSE Labs
Wang Nan Nov. 6, 2017, 9:59 a.m. UTC | #3
On 2017/11/6 16:52, Michal Hocko wrote:
> On Mon 06-11-17 15:04:40, Bob Liu wrote:
>> On Mon, Nov 6, 2017 at 11:36 AM, Wang Nan <wangnan0@huawei.com> wrote:
>>> This leaks TLB entries: for example, when oom_reaper selects a task and
>>> reaps its virtual memory space, another thread in the same task group may
>>> still be running on another core and can access the already-freed memory
>>> through stale TLB entries.
> No threads should be running in userspace by the time the reaper gets to
> unmap their address space. So the only potential case is that they are
> accessing the user memory from the kernel, when we should fault, and we
> have MMF_UNSTABLE to cause a SIGBUS. So is the race you are describing
> real?
>
>>> This patch gathers each vma instead of the full vm space, so tlb->fullmm
>>> is not true. The behavior of the oom reaper becomes similar to munmapping
>>> before do_exit, which should be safe for all archs.
> I do not have any objections to per-vma TLB flushing, because it would
> free gathered pages sooner, but I am not sure I see any real problem
> here. Have you seen any real issues, or is this more of a review-driven
> fix?

We saw the problem when we tried to reuse the oom reaper's code in
another situation. In our situation, we allow reaping a task before all
other tasks in its task group have finished their exit procedure.

I'd like to know what ensures that "no threads should be running in
userspace by the time the reaper" runs.

Thank you.
Michal Hocko Nov. 6, 2017, 10:40 a.m. UTC | #4
On Mon 06-11-17 17:59:54, Wangnan (F) wrote:
> On 2017/11/6 16:52, Michal Hocko wrote:
> > I do not have any objections to per-vma TLB flushing, because it would
> > free gathered pages sooner, but I am not sure I see any real problem
> > here. Have you seen any real issues, or is this more of a review-driven
> > fix?
>
> We saw the problem when we tried to reuse the oom reaper's code in
> another situation. In our situation, we allow reaping a task before all
> other tasks in its task group have finished their exit procedure.
>
> I'd like to know what ensures that "no threads should be running in
> userspace by the time the reaper" runs.

All tasks are killed by that time, so they should have been taken out of
userspace into the kernel.
-- 
Michal Hocko
SUSE Labs
Wang Nan Nov. 6, 2017, 11:03 a.m. UTC | #5
On 2017/11/6 18:40, Michal Hocko wrote:
> On Mon 06-11-17 17:59:54, Wangnan (F) wrote:
>> I'd like to know what ensures that "no threads should be running in
>> userspace by the time the reaper" runs.
> All tasks are killed by that time, so they should have been taken out of
> userspace into the kernel.

Sorry, I have read oom_kill_process() but am still unable to understand
why all tasks have left userspace.

oom_kill_process() kills the victim by sending SIGKILL. The signal is
broadcast to all tasks in the victim's task group, but asynchronously.
In the following case a race can happen (Thread1 is in Task1's task
group):

core 1                 core 2
Thread1 running        oom_kill_process() selects Task1 as victim
                       oom_kill_process() sends SIGKILL to Task1
                       oom_kill_process() sends SIGKILL to Thread1
                       oom_kill_process() wakes up oom reaper
                       switch to oom_reaper
                       __oom_reap_task_mm
                       tlb_gather_mmu
                       unmap_page_range, reap Task1
                       tlb_finish_mmu
Write page
Kicked off the core
Receives SIGKILL

So what makes Thread1 get kicked off core 1 before core 2 starts
unmapping?

Thank you.
Michal Hocko Nov. 6, 2017, 11:57 a.m. UTC | #6
On Mon 06-11-17 19:03:34, Wangnan (F) wrote:
> Sorry, I have read oom_kill_process() but am still unable to understand
> why all tasks have left userspace.
>
> oom_kill_process() kills the victim by sending SIGKILL. The signal is
> broadcast to all tasks in the victim's task group, but asynchronously.
> In the following case a race can happen (Thread1 is in Task1's task
> group):
>
> core 1                 core 2
> Thread1 running        oom_kill_process() selects Task1 as victim
>                        oom_kill_process() sends SIGKILL to Task1
>                        oom_kill_process() sends SIGKILL to Thread1
>                        oom_kill_process() wakes up oom reaper
>                        switch to oom_reaper
>                        __oom_reap_task_mm
>                        tlb_gather_mmu
>                        unmap_page_range, reap Task1
>                        tlb_finish_mmu
> Write page
> Kicked off the core
> Receives SIGKILL
>
> So what makes Thread1 get kicked off core 1 before core 2 starts
> unmapping?

complete_signal() should call signal_wake_up() on all threads, because
this is a group-fatal signal, and that should send an IPI to each of the
CPUs they run on. Even if we do not wait for the IPI to complete, the
race window should be only a few instructions, while it takes quite some
time to hand over to the oom reaper.

-- 
Michal Hocko
SUSE Labs
Michal Hocko Nov. 6, 2017, 12:27 p.m. UTC | #7
On Mon 06-11-17 09:52:51, Michal Hocko wrote:
> On Mon 06-11-17 15:04:40, Bob Liu wrote:
> > On Mon, Nov 6, 2017 at 11:36 AM, Wang Nan <wangnan0@huawei.com> wrote:
> > > This leaks TLB entries: for example, when oom_reaper selects a task and
> > > reaps its virtual memory space, another thread in the same task group may
> > > still be running on another core and can access the already-freed memory
> > > through stale TLB entries.
>
> No threads should be running in userspace by the time the reaper gets to
> unmap their address space. So the only potential case is that they are
> accessing the user memory from the kernel, when we should fault, and we
> have MMF_UNSTABLE to cause a SIGBUS.

I hope we have clarified that the tasks are not running in userspace at
the time of reaping. I am still wondering whether this is real from
kernel space via copy_{from,to}_user. Is it possible we won't fault?
I am not sure I understand what "Given that the ASID allocator will
never re-allocate a dirty ASID" means exactly. Will, could you clarify,
please?
-- 
Michal Hocko
SUSE Labs
Will Deacon Nov. 7, 2017, 12:54 a.m. UTC | #8
On Mon, Nov 06, 2017 at 01:27:26PM +0100, Michal Hocko wrote:
> I hope we have clarified that the tasks are not running in userspace at
> the time of reaping. I am still wondering whether this is real from
> kernel space via copy_{from,to}_user. Is it possible we won't fault?
> I am not sure I understand what "Given that the ASID allocator will
> never re-allocate a dirty ASID" means exactly. Will, could you clarify,
> please?

Sure. Basically, we tag each address space with an ASID (PCID on x86)
which is resident in the TLB. This means we can elide TLB invalidation
when pulling down a full mm, because we won't ever assign that ASID to
another mm without doing TLB invalidation elsewhere (which actually just
nukes the whole TLB).

I think that means we could potentially not fault on a kernel uaccess,
because we could hit in the TLB. Perhaps a fix would be to set the force
variable in tlb_finish_mmu() if MMF_UNSTABLE is set on the mm?

Will
Wang Nan Nov. 7, 2017, 3:51 a.m. UTC | #9
On 2017/11/6 19:57, Michal Hocko wrote:
> On Mon 06-11-17 19:03:34, Wangnan (F) wrote:
>> So what makes Thread1 get kicked off core 1 before core 2 starts
>> unmapping?
> complete_signal() should call signal_wake_up() on all threads, because
> this is a group-fatal signal, and that should send an IPI to each of the
> CPUs they run on. Even if we do not wait for the IPI to complete, the
> race window should be only a few instructions, while it takes quite some
> time to hand over to the oom reaper.

If complete_signal() is the mechanism we rely on to ensure all threads
have left userspace, then I'm sure it is not enough. As you said, we
still have a small race window. On some platforms, an IPI from one core
to another takes a bit longer than you might expect, and the core that
receives the IPI may be running at a very low frequency.

In our situation, we put the reaper code in do_exit() after receiving
SIGKILL, and observed TLB entries leaking. Since this is a SIGKILL,
complete_signal() should have been executed, so I think oom_reaper has
a similar problem.

Thank you.
Michal Hocko Nov. 7, 2017, 7:54 a.m. UTC | #10
On Tue 07-11-17 00:54:32, Will Deacon wrote:
> On Mon, Nov 06, 2017 at 01:27:26PM +0100, Michal Hocko wrote:
> > I am not sure I understand what "Given that the ASID allocator will
> > never re-allocate a dirty ASID" means exactly. Will, could you clarify,
> > please?
>
> Sure. Basically, we tag each address space with an ASID (PCID on x86)
> which is resident in the TLB. This means we can elide TLB invalidation
> when pulling down a full mm, because we won't ever assign that ASID to
> another mm without doing TLB invalidation elsewhere (which actually just
> nukes the whole TLB).

Thanks for the clarification!

> I think that means we could potentially not fault on a kernel uaccess,
> because we could hit in the TLB. Perhaps a fix would be to set the force
> variable in tlb_finish_mmu() if MMF_UNSTABLE is set on the mm?

OK, I suspect this is a more likely scenario than a race with the
reschedule IPI discussed elsewhere in the email thread. Even though I
have to admit I have never checked how IPIs are implemented on arm64, so
my perception might be off.

I think it would be best to simply do per-VMA TLB gather like the
original patch does. It would be great if the changelog absorbed the
above two paragraphs. Wangnan, could you resend with those
clarifications, please?
-- 
Michal Hocko
SUSE Labs

Patch

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index dee0f75..18c5b35 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -532,7 +532,6 @@  static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	tlb_gather_mmu(&tlb, mm, 0, -1);
 	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
@@ -547,11 +546,13 @@  static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
 					 NULL);
+			tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
+		}
 	}
-	tlb_finish_mmu(&tlb, 0, -1);
 	pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
 			task_pid_nr(tsk), tsk->comm,
 			K(get_mm_counter(mm, MM_ANONPAGES)),