[0/5] Speed up mremap on large regions

Message ID 20200930222130.4175584-1-kaleshsingh@google.com

Message

Kalesh Singh Sept. 30, 2020, 10:21 p.m. UTC
mremap time can be optimized by moving entries at the PMD/PUD level if
the source and destination addresses are PMD/PUD-aligned and
PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
x86. Other architectures where this type of move is supported and known to
be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
and HAVE_MOVE_PUD.

Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
region on x86 and arm64:

    - HAVE_MOVE_PMD is already enabled on x86 : N/A
    - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up

    - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
    - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up

          Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
          give a total of ~150x speed up on arm64.
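
For reference, a minimal userspace sketch of how such a measurement can
be taken (my illustration, not part of the series; the real harness is
the mremap_test.c selftest added in patch 1). It maps a 1GB region at a
PUD-aligned (1GB-aligned) address, faults it in so there are page
tables to move, and times the mremap():

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <sys/mman.h>

    #define _1GB (1UL << 30)

    int main(void)
    {
            struct timespec t0, t1;
            /* Hint addresses chosen to be 1GB (PUD) aligned. */
            void *src = mmap((void *)(4 * _1GB), _1GB,
                             PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
            if (src == MAP_FAILED)
                    return 1;
            memset(src, 0xa5, _1GB);        /* populate page tables */

            clock_gettime(CLOCK_MONOTONIC, &t0);
            void *dst = mremap(src, _1GB, _1GB,
                               MREMAP_MAYMOVE | MREMAP_FIXED,
                               (void *)(8 * _1GB));
            clock_gettime(CLOCK_MONOTONIC, &t1);
            if (dst == MAP_FAILED)
                    return 1;

            printf("1GB mremap: %lld ns\n",
                   (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL +
                   (t1.tv_nsec - t0.tv_nsec));
            return 0;
    }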


Kalesh Singh (5):
  kselftests: vm: Add mremap tests
  arm64: mremap speedup - Enable HAVE_MOVE_PMD
  mm: Speedup mremap on 1GB or larger regions
  arm64: mremap speedup - Enable HAVE_MOVE_PUD
  x86: mremap speedup - Enable HAVE_MOVE_PUD

 arch/Kconfig                             |   7 +
 arch/arm64/Kconfig                       |   2 +
 arch/arm64/include/asm/pgtable.h         |   1 +
 arch/x86/Kconfig                         |   1 +
 mm/mremap.c                              | 211 +++++++++++++++++---
 tools/testing/selftests/vm/.gitignore    |   1 +
 tools/testing/selftests/vm/Makefile      |   1 +
 tools/testing/selftests/vm/mremap_test.c | 243 +++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests   |  11 +
 9 files changed, 448 insertions(+), 30 deletions(-)
 create mode 100644 tools/testing/selftests/vm/mremap_test.c

Comments

Kalesh Singh Sept. 30, 2020, 10:21 p.m. UTC | #1
HAVE_MOVE_PMD enables remapping pages at the PMD level if both the
source and destination addresses are PMD-aligned.

HAVE_MOVE_PMD is already enabled on x86. The original patch [1] that
introduced this config did not enable it on arm64 at the time because
of performance issues with flushing the TLB on every PMD move. These
issues have since been addressed in more recent releases with
improvements to the arm64 TLB invalidation and core mmu_gather code as
Will Deacon mentioned in [2].
Kirill A. Shutemov Sept. 30, 2020, 10:32 p.m. UTC | #2
On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> mremap time can be optimized by moving entries at the PMD/PUD level if
> the source and destination addresses are PMD/PUD-aligned and
> PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> x86. Other architectures where this type of move is supported and known to
> be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> and HAVE_MOVE_PUD.
> 
> Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> region on x86 and arm64:
> 
>     - HAVE_MOVE_PMD is already enabled on x86 : N/A
>     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> 
>     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
>     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> 
>           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
>           give a total of ~150x speed up on arm64.

Is there a *real* workload that benefits from HAVE_MOVE_PUD?
Lokesh Gidra Sept. 30, 2020, 10:42 p.m. UTC | #3
On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > mremap time can be optimized by moving entries at the PMD/PUD level if
> > the source and destination addresses are PMD/PUD-aligned and
> > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > x86. Other architectures where this type of move is supported and known to
> > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > and HAVE_MOVE_PUD.
> >
> > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > region on x86 and arm64:
> >
> >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> >
> >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> >
> >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> >           give a total of ~150x speed up on arm64.
>
> Is there a *real* workload that benefits from HAVE_MOVE_PUD?
>
We have a Java garbage collector under development which requires
moving physical pages of a multi-gigabyte heap using mremap. During this
move, the application threads have to be paused for correctness. It is
critical to keep this pause as short as possible to avoid jitter
during user interaction. This is where HAVE_MOVE_PUD will greatly
help.
> --
>  Kirill A. Shutemov
Joel Fernandes Sept. 30, 2020, 10:46 p.m. UTC | #4
On Wed, Sep 30, 2020 at 6:42 PM Lokesh Gidra <lokeshgidra@google.com> wrote:
>
> On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> >
> > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > the source and destination addresses are PMD/PUD-aligned and
> > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > x86. Other architectures where this type of move is supported and known to
> > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > and HAVE_MOVE_PUD.
> > >
> > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > region on x86 and arm64:
> > >
> > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > >
> > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > >
> > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > >           give a total of ~150x speed up on arm64.
> >
> > Is there a *real* workload that benefits from HAVE_MOVE_PUD?
> >
> We have a Java garbage collector under development which requires
> moving physical pages of a multi-gigabyte heap using mremap. During this
> move, the application threads have to be paused for correctness. It is
> critical to keep this pause as short as possible to avoid jitter
> during user interaction. This is where HAVE_MOVE_PUD will greatly
> help.

And that detail should totally have gone into the commit message :-/

Thanks,

 - Joel
Kalesh Singh Sept. 30, 2020, 11:03 p.m. UTC | #5
On Wed, Sep 30, 2020 at 6:47 PM Joel Fernandes <joelaf@google.com> wrote:
>
> On Wed, Sep 30, 2020 at 6:42 PM Lokesh Gidra <lokeshgidra@google.com> wrote:
> >
> > On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> > <kirill.shutemov@linux.intel.com> wrote:
> > >
> > > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > > the source and destination addresses are PMD/PUD-aligned and
> > > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > > x86. Other architectures where this type of move is supported and known to
> > > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > > and HAVE_MOVE_PUD.
> > > >
> > > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > > region on x86 and arm64:
> > > >
> > > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > > >
> > > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > > >
> > > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > > >           give a total of ~150x speed up on arm64.
> > >
> > > Is there a *real* workload that benefits from HAVE_MOVE_PUD?
> > >
> > We have a Java garbage collector under development which requires
> > moving physical pages of a multi-gigabyte heap using mremap. During this
> > move, the application threads have to be paused for correctness. It is
> > critical to keep this pause as short as possible to avoid jitter
> > during user interaction. This is where HAVE_MOVE_PUD will greatly
> > help.
>
> And that detail should totally have gone into the commit message :-/

Hi Joel,
The patch that introduces HAVE_MOVE_PUD in the series mentions the
Android garbage collection use case. I can add these details there in
the next version.
Thanks,
Kalesh
>
> Thanks,
>
>  - Joel
John Hubbard Oct. 1, 2020, 7:24 a.m. UTC | #6
On 9/30/20 3:21 PM, Kalesh Singh wrote:
> Test mremap on regions of various sizes and alignments and validate
> data after remapping. Also provide total time for remapping
> the region which is useful for performance comparison of the mremap
> optimizations that move pages at the PMD/PUD levels if HAVE_MOVE_PMD
> and/or HAVE_MOVE_PUD are enabled.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>   tools/testing/selftests/vm/.gitignore    |   1 +
>   tools/testing/selftests/vm/Makefile      |   1 +
>   tools/testing/selftests/vm/mremap_test.c | 243 +++++++++++++++++++++++
>   tools/testing/selftests/vm/run_vmtests   |  11 +
>   4 files changed, 256 insertions(+)
>   create mode 100644 tools/testing/selftests/vm/mremap_test.c
>

Hi,

This takes 100x longer to run than it should: 1:46 min of running time on
my x86_64 test machine. The entire selftests/vm test suite takes 45 sec on a
bad day, where a bad day is defined as up until about tomorrow, when I will
post a compaction_test.c patch that will cut that time down to about half, or
24 sec total run time...for 22 tests!

In other words, most tests here should take about 1 or 2 seconds, unless they
are exceptionally special snowflakes.

At the very least, the invocation within run_vmtests could pass in a parameter
to tell it to run a shorter test. But there are also opportunities to speed it
up.

...
> +
> +#define MAKE_TEST(source_align, destination_align, size,	\
> +		  overlaps, should_fail, test_name)		\
> +{								\
> +	.name = test_name,					\
> +	.config = {						\
> +		.src_alignment = source_align,			\
> +		.dest_alignment = destination_align,		\
> +		.region_size = size,				\
> +		.overlapping = overlaps,			\
> +	},							\
> +	.expect_failure = should_fail				\
> +}
> +

OK...

> +#define MAKE_SIMPLE_TEST(source_align, destination_align, size)	\
> +	MAKE_TEST(source_align, destination_align, size, 0, 0,	\
> +		  #size " mremap - Source " #source_align	\
> +		  " aligned, Destination " #destination_align	\
> +		  " aligned")
> +

...and not OK. :)  Because this is just obscuring things. Both the
code and the output are harder to read. For these tiny test programs,
clarity is what we want, not necessarily compactness on the screen.
Because people want to get in, understand what they see in the code
and match it up with what is printed to stdout--without spending much
time. (And that includes run time, as hinted at above.)

...
> +
> +/* Returns the time taken for the remap on success else returns -1. */
> +static long long remap_region(struct config c)
> +{
> +	void *addr, *src_addr, *dest_addr;
> +	int i, j;
> +	struct timespec t_start = {0, 0}, t_end = {0, 0};
> +	long long  start_ns, end_ns, align_mask, ret, offset;
> +	char pattern[] = {0xa8, 0xcd, 0xfe};

I'd recommend using rand() to help choose the pattern, and using different
patterns for different runs. When testing memory, it's a pitfall to have
the same test pattern.

Normally, you'd also want to report the random seed or the test pattern(s)
or both to stdout, and provide a way to run with the same pattern, but
here I don't *think* you care: all patterns should have the same performance.
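
A sketch of what that could look like in remap_region() (the seed
handling is my assumption, not something the patch does; it relies on
<stdlib.h> and <time.h> being included):

    /* Derive the fill pattern from a reported, reproducible seed. */
    unsigned int seed = (unsigned int)time(NULL);

    printf("Pattern seed: %u\n", seed);
    srand(seed);
    for (i = 0; i < pattern_size; i++)
            pattern[i] = rand() & 0xff;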

> +	int pattern_size = ARRAY_SIZE(pattern);
> +
> +	src_addr = get_source_mapping(c);
> +	if (!src_addr) {
> +		ret = -1;
> +		goto out;
> +	}
> +
> +	/* Set byte pattern */
> +	for (i = 0; i < c.region_size; i++) {
> +		for (j = 0; i+j < c.region_size && j < pattern_size; j++)
> +			memset((char *) src_addr + i+j, pattern[j], 1);
> +		i += pattern_size-1;
> +	}
> +
> +	align_mask = ~(c.dest_alignment - 1);
> +	offset = (c.overlapping) ? -c.dest_alignment : c.dest_alignment;

A comment for what the above two lines are doing would be a nice touch.
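
Something like this, perhaps (comment wording is mine, based on what
the two lines appear to do):

    /*
     * align_mask rounds an address down to the requested destination
     * alignment; offset then places the destination one alignment unit
     * before the source for the overlapping tests, else one unit after.
     */
    align_mask = ~(c.dest_alignment - 1);
    offset = (c.overlapping) ? -c.dest_alignment : c.dest_alignment;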

...
> +	start_ns = t_start.tv_sec * 1000000000ULL + t_start.tv_nsec;
> +	end_ns = t_end.tv_sec * 1000000000ULL + t_end.tv_nsec;

A const or #defined for all those 0000's would help.
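
For instance (assuming a local define in the test, mirroring the
kernel's constant of the same name):

    #define NSEC_PER_SEC	1000000000ULL

    start_ns = t_start.tv_sec * NSEC_PER_SEC + t_start.tv_nsec;
    end_ns = t_end.tv_sec * NSEC_PER_SEC + t_end.tv_nsec;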

...
> +int main(int argc, char *argv[])
> +{
> +	int failures = 0;
> +	int i;
> +
> +	struct test test_cases[] = {
> +		/* Expected mremap failures */
> +		MAKE_TEST(_4KB, _4KB, _4KB, 1 /* overlaps */, 1 /* fails */,

Named flags instead of 1's and 0's would avoid the need for messy comments.
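
For example (the flag names here are my invention):

    #define NON_OVERLAPPING	0
    #define OVERLAPPING		1
    #define EXPECT_SUCCESS	0
    #define EXPECT_FAILURE	1

    MAKE_TEST(_4KB, _4KB, _4KB, OVERLAPPING, EXPECT_FAILURE,
              "mremap - Source and Destination Regions Overlapping"),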

> +			  "mremap - Source and Destination Regions Overlapping"),
> +		MAKE_TEST(_4KB, _1KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> +			  "mremap - Destination Address Misaligned (1KB-aligned)"),
> +		MAKE_TEST(_1KB, _4KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> +			  "mremap - Source Address Misaligned (1KB-aligned)"),
> +
> +		/* Src addr PTE aligned */
> +		MAKE_SIMPLE_TEST(PTE, PTE, _8KB),
> +
> +		/* Src addr 1MB aligned */
> +		MAKE_SIMPLE_TEST(_1MB, PTE, _2MB),
> +		MAKE_SIMPLE_TEST(_1MB, _1MB, _2MB),
> +
> +		/* Src addr PMD aligned */
> +		MAKE_SIMPLE_TEST(PMD, PTE, _4MB),
> +		MAKE_SIMPLE_TEST(PMD, _1MB, _4MB),
> +		MAKE_SIMPLE_TEST(PMD, PMD, _4MB),
> +
> +		/* Src addr PUD aligned */
> +		MAKE_SIMPLE_TEST(PUD, PTE, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, _1MB, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, PMD, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, PUD, _2GB),

Too concise. Not fun lining these up with the stdout report.


thanks,
-- 
John Hubbard
NVIDIA
Kirill A. Shutemov Oct. 1, 2020, 12:27 p.m. UTC | #7
On Wed, Sep 30, 2020 at 03:42:17PM -0700, Lokesh Gidra wrote:
> On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> >
> > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > the source and destination addresses are PMD/PUD-aligned and
> > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > x86. Other architectures where this type of move is supported and known to
> > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > and HAVE_MOVE_PUD.
> > >
> > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > region on x86 and arm64:
> > >
> > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > >
> > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > >
> > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > >           give a total of ~150x speed up on arm64.
> >
> > Is there a *real* workload that benefits from HAVE_MOVE_PUD?
> >
> We have a Java garbage collector under development which requires
> moving physical pages of a multi-gigabyte heap using mremap. During this
> move, the application threads have to be paused for correctness. It is
> critical to keep this pause as short as possible to avoid jitter
> during user interaction. This is where HAVE_MOVE_PUD will greatly
> help.

Any chance to quantify the effect of mremap() with and without
HAVE_MOVE_PUD?

I doubt it's a major contributor to the GC pause. I expect you need to
move tens of gigs to get a sizable effect. And if your GC routinely moves
tens of gigs, maybe the problem is somewhere else?

I'm asking for numbers, because an increase in complexity comes with a cost.
If it doesn't provide a substantial benefit to a real workload,
maintaining the code forever doesn't make sense.
Kirill A. Shutemov Oct. 1, 2020, 12:36 p.m. UTC | #8
On Wed, Sep 30, 2020 at 10:21:20PM +0000, Kalesh Singh wrote:
> Android needs to move large memory regions for garbage collection.
> Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD
> level if the source and destination addresses are PUD-aligned.
> For CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves
> PGD entries, since the PUD entry is “folded back” onto the PGD entry.
> Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
> isn't supported/tested can turn this off by not selecting the config.
> 
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>  arch/Kconfig                     |   7 +
>  arch/arm64/include/asm/pgtable.h |   1 +
>  mm/mremap.c                      | 211 ++++++++++++++++++++++++++-----
>  3 files changed, 189 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/Kconfig b/arch/Kconfig
> index af14a567b493..5eabaa00bf9b 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
>  	  Archs need to ensure they use a high enough resolution clock to
>  	  support irq time accounting and then call enable_sched_clock_irqtime().
>  
> +config HAVE_MOVE_PUD
> +	bool
> +	help
> +	  Architectures that select this are able to move page tables at the
> +	  PUD level. If there are only 3 page table levels, the move effectively
> +	  happens at the PGD level.
> +
>  config HAVE_MOVE_PMD
>  	bool
>  	help
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index d5d3fbe73953..8848125e3024 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
>  #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>  
>  #define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> +#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
>  
>  #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>  #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)

This doesn't belong in this patch.

> diff --git a/mm/mremap.c b/mm/mremap.c
> index 138abbae4f75..a5a1440bd366 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -249,14 +249,167 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>  
>  	return true;
>  }
> +#else
> +static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> +		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
> +{
> +	return false;
> +}
>  #endif
>  
> +#ifdef CONFIG_HAVE_MOVE_PUD
> +static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +
> +	pgd = pgd_offset(mm, addr);
> +	if (pgd_none_or_clear_bad(pgd))
> +		return NULL;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none_or_clear_bad(p4d))
> +		return NULL;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none_or_clear_bad(pud))
> +		return NULL;
> +
> +	return pud;
> +}
> +
> +static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
> +			    unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +
> +	pgd = pgd_offset(mm, addr);
> +	p4d = p4d_alloc(mm, pgd, addr);
> +	if (!p4d)
> +		return NULL;
> +	pud = pud_alloc(mm, p4d, addr);
> +	if (!pud)
> +		return NULL;
> +
> +	return pud;
> +}
> +
> +static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> +		  unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> +{
> +	spinlock_t *old_ptl, *new_ptl;
> +	struct mm_struct *mm = vma->vm_mm;
> +	pud_t pud;
> +
> +	/*
> +	 * The destination pud shouldn't be established, free_pgtables()
> +	 * should have released it.
> +	 */
> +	if (WARN_ON_ONCE(!pud_none(*new_pud)))
> +		return false;
> +
> +	/*
> +	 * We don't have to worry about the ordering of src and dst
> +	 * ptlocks because exclusive mmap_lock prevents deadlock.
> +	 */
> +	old_ptl = pud_lock(vma->vm_mm, old_pud);
> +	new_ptl = pud_lockptr(mm, new_pud);
> +	if (new_ptl != old_ptl)
> +		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> +
> +	/* Clear the pud */
> +	pud = *old_pud;
> +	pud_clear(old_pud);
> +
> +	VM_BUG_ON(!pud_none(*new_pud));
> +
> +	/* Set the new pud */
> +	set_pud_at(mm, new_addr, new_pud, pud);
> +	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
> +	if (new_ptl != old_ptl)
> +		spin_unlock(new_ptl);
> +	spin_unlock(old_ptl);
> +
> +	return true;
> +}
> +#else
> +static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> +		  unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> +{
> +	return false;
> +}
> +#endif
> +
> +enum pgt_entry {
> +	NORMAL_PMD,
> +	HPAGE_PMD,
> +	NORMAL_PUD,
> +};
> +
> +/*
> + * Returns an extent of the corresponding size for the pgt_entry specified if valid.
> + * Else returns a smaller extent bounded by the end of the source and destination
> + * pgt_entry. Returns 0 if an invalid pgt_entry is specified.
> + */
> +static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
> +			unsigned long old_end, unsigned long new_addr)
> +{
> +	unsigned long next, extent, mask, size;
> +
> +	if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
> +		mask = PMD_MASK;
> +		size = PMD_SIZE;
> +	} else if (entry == NORMAL_PUD) {
> +		mask = PUD_MASK;
> +		size = PUD_SIZE;
> +	} else
> +		return 0;
> +
> +	next = (old_addr + size) & mask;
> +	/* even if next overflowed, extent below will be ok */
> +	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
> +	next = (new_addr + size) & mask;
> +	if (extent > next - new_addr)
> +		extent = next - new_addr;
> +	return extent;
> +}
> +
> +/*
> + * Attempts to speedup the move by moving entry at the level corresponding to
> + * pgt_entry. Returns true if the move was successful, else false.
> + */
> +static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
> +			unsigned long old_addr, unsigned long new_addr, void *old_entry,
> +			void *new_entry, bool need_rmap_locks)
> +{
> +	bool moved = false;
> +
> +	/* See comment in move_ptes() */
> +	if (need_rmap_locks)
> +		take_rmap_locks(vma);
> +	if (entry == NORMAL_PMD)
> +		moved =  move_normal_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> +	else if (entry == NORMAL_PUD)
> +		moved =  move_normal_pud(vma, old_addr, new_addr, old_entry, new_entry);
> +	else if (entry == HPAGE_PMD)
> +		moved =  move_huge_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> +	else
> +		WARN_ON_ONCE(1);

BUILD_BUG() should work.

And why not use switch() instead of ifs.
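
A sketch of the switch() form (my restructuring of the quoted hunk; I
keep WARN_ON_ONCE() in the default arm here, since whether BUILD_BUG()
can be used is settled later in the thread):

    switch (entry) {
    case NORMAL_PMD:
            moved = move_normal_pmd(vma, old_addr, new_addr,
                                    old_entry, new_entry);
            break;
    case NORMAL_PUD:
            moved = move_normal_pud(vma, old_addr, new_addr,
                                    old_entry, new_entry);
            break;
    case HPAGE_PMD:
            moved = move_huge_pmd(vma, old_addr, new_addr,
                                  old_entry, new_entry);
            break;
    default:
            WARN_ON_ONCE(1);
            break;
    }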

> +	if (need_rmap_locks)
> +		drop_rmap_locks(vma);
> +
> +	return moved;
> +}
> +
>  unsigned long move_page_tables(struct vm_area_struct *vma,
>  		unsigned long old_addr, struct vm_area_struct *new_vma,
>  		unsigned long new_addr, unsigned long len,
>  		bool need_rmap_locks)
>  {
> -	unsigned long extent, next, old_end;
> +	unsigned long extent, old_end;
>  	struct mmu_notifier_range range;
>  	pmd_t *old_pmd, *new_pmd;
>  
> @@ -269,14 +422,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  
>  	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
>  		cond_resched();
> -		next = (old_addr + PMD_SIZE) & PMD_MASK;
> -		/* even if next overflowed, extent below will be ok */
> -		extent = next - old_addr;
> -		if (extent > old_end - old_addr)
> -			extent = old_end - old_addr;
> -		next = (new_addr + PMD_SIZE) & PMD_MASK;
> -		if (extent > next - new_addr)
> -			extent = next - new_addr;
> +#ifdef CONFIG_HAVE_MOVE_PUD
> +		/*
> +		 * If extent is PUD-sized try to speed up the move by moving at the
> +		 * PUD level if possible.
> +		 */
> +		extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
> +		if (extent == PUD_SIZE) {
> +			pud_t *old_pud, *new_pud;
> +
> +			old_pud = get_old_pud(vma->vm_mm, old_addr);
> +			if (!old_pud)
> +				continue;
> +			new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
> +			if (!new_pud)
> +				break;
> +			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
> +					   old_pud, new_pud, need_rmap_locks))
> +				continue;
> +		}
> +#endif
> +		extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
>  		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
>  		if (!old_pmd)
>  			continue;
> @@ -284,18 +450,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  		if (!new_pmd)
>  			break;
>  		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
> -			if (extent == HPAGE_PMD_SIZE) {
> -				bool moved;
> -				/* See comment in move_ptes() */
> -				if (need_rmap_locks)
> -					take_rmap_locks(vma);
> -				moved = move_huge_pmd(vma, old_addr, new_addr,
> -						      old_pmd, new_pmd);
> -				if (need_rmap_locks)
> -					drop_rmap_locks(vma);
> -				if (moved)
> -					continue;
> -			}
> +			if (extent == HPAGE_PMD_SIZE &&
> +			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd,
> +					   new_pmd, need_rmap_locks))
> +				continue;
>  			split_huge_pmd(vma, old_pmd, old_addr);
>  			if (pmd_trans_unstable(old_pmd))
>  				continue;
> @@ -305,15 +463,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  			 * If the extent is PMD-sized, try to speed the move by
>  			 * moving at the PMD level if possible.
>  			 */
> -			bool moved;
> -
> -			if (need_rmap_locks)
> -				take_rmap_locks(vma);
> -			moved = move_normal_pmd(vma, old_addr, new_addr,
> -						old_pmd, new_pmd);
> -			if (need_rmap_locks)
> -				drop_rmap_locks(vma);
> -			if (moved)
> +			if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, old_pmd,
> +					   new_pmd, need_rmap_locks))
>  				continue;
>  #endif
>  		}
> -- 
> 2.28.0.709.gb0816b6eb0-goog
> 

-- 
 Kirill A. Shutemov
Kalesh Singh Oct. 1, 2020, 3:46 p.m. UTC | #9
On Thu, Oct 1, 2020 at 3:24 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 9/30/20 3:21 PM, Kalesh Singh wrote:
> > Test mremap on regions of various sizes and alignments and validate
> > data after remapping. Also provide total time for remapping
> > the region which is useful for performance comparison of the mremap
> > optimizations that move pages at the PMD/PUD levels if HAVE_MOVE_PMD
> > and/or HAVE_MOVE_PUD are enabled.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >   tools/testing/selftests/vm/.gitignore    |   1 +
> >   tools/testing/selftests/vm/Makefile      |   1 +
> >   tools/testing/selftests/vm/mremap_test.c | 243 +++++++++++++++++++++++
> >   tools/testing/selftests/vm/run_vmtests   |  11 +
> >   4 files changed, 256 insertions(+)
> >   create mode 100644 tools/testing/selftests/vm/mremap_test.c
> >
>
> Hi,
>
> This takes 100x longer to run than it should: 1:46 min of running time on
> my x86_64 test machine. The entire selftests/vm test suite takes 45 sec on a
> bad day, where a bad day is defined as up until about tomorrow, when I will
> post a compaction_test.c patch that will cut that time down to about half, or
> 24 sec total run time...for 22 tests!
>
> In other words, most tests here should take about 1 or 2 seconds, unless they
> are exceptionally special snowflakes.
>
> At the very least, the invocation within run_vmtests could pass in a parameter
> to tell it to run a shorter test. But there are also opportunities to speed it
> up.


Hi John. Thanks for the comments.

The bulk of the test time comes from setting and verifying the byte
pattern in 1GB or larger regions for testing the HAVE_MOVE_PUD
functionality. Without testing 1GB or larger regions the test takes
0.18 seconds on my x86_64 system.

One option I think would be to only validate to a certain threshold of
the remap region. We can have a flag to specify a threshold or to
validate the full size of the remapped region. I did some initial
testing with a 4MB threshold and the total time dropped to 0.38 seconds
from 1:12 minutes (for verifying the entire remapped region). The 4MB
threshold would cover the full region of all the tests excluding those
for the 1GB and 2GB sized regions. Let me know what you think.
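
A minimal sketch of that idea (full_validation, the clean_up label and
the 4MB default are placeholders of mine, not from the patch):

    /* Verify at most threshold bytes unless full validation is requested. */
    #define VALIDATION_DEFAULT_THRESHOLD	(4UL << 20)	/* 4MB */

    unsigned long threshold = full_validation ?
                              c.region_size : VALIDATION_DEFAULT_THRESHOLD;
    unsigned long off;

    for (off = 0; off < c.region_size && off < threshold; off++) {
            if (((char *)dest_addr)[off] != pattern[off % pattern_size]) {
                    ret = -1;
                    goto clean_up;
            }
    }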

Your other comments below sound good to me. I’ll make those changes in
the next version.

Thanks,
Kalesh

>
> ...
> > +
> > +#define MAKE_TEST(source_align, destination_align, size,     \
> > +               overlaps, should_fail, test_name)             \
> > +{                                                            \
> > +     .name = test_name,                                      \
> > +     .config = {                                             \
> > +             .src_alignment = source_align,                  \
> > +             .dest_alignment = destination_align,            \
> > +             .region_size = size,                            \
> > +             .overlapping = overlaps,                        \
> > +     },                                                      \
> > +     .expect_failure = should_fail                           \
> > +}
> > +
>
> OK...
>
> > +#define MAKE_SIMPLE_TEST(source_align, destination_align, size)      \
> > +     MAKE_TEST(source_align, destination_align, size, 0, 0,  \
> > +               #size " mremap - Source " #source_align       \
> > +               " aligned, Destination " #destination_align   \
> > +               " aligned")
> > +
>
> ...and not OK. :)  Because this is just obscuring things. Both the
> code and the output are harder to read. For these tiny test programs,
> clarity is what we want, not necessarily compactness on the screen.
> Because people want to get in, understand what they see in the code
> and match it up with what is printed to stdout--without spending much
> time. (And that includes run time, as hinted at above.)
>
> ...
> > +
> > +/* Returns the time taken for the remap on success else returns -1. */
> > +static long long remap_region(struct config c)
> > +{
> > +     void *addr, *src_addr, *dest_addr;
> > +     int i, j;
> > +     struct timespec t_start = {0, 0}, t_end = {0, 0};
> > +     long long  start_ns, end_ns, align_mask, ret, offset;
> > +     char pattern[] = {0xa8, 0xcd, 0xfe};
>
> I'd recommend using rand() to help choose the pattern, and using different
> patterns for different runs. When testing memory, it's a pitfall to have
> the same test pattern.
>
> Normally, you'd also want to report the random seed or the test pattern(s)
> or both to stdout, and provide a way to run with the same pattern, but
> here I don't *think* you care: all patterns should have the same performance.
>
> > +     int pattern_size = ARRAY_SIZE(pattern);
> > +
> > +     src_addr = get_source_mapping(c);
> > +     if (!src_addr) {
> > +             ret = -1;
> > +             goto out;
> > +     }
> > +
> > +     /* Set byte pattern */
> > +     for (i = 0; i < c.region_size; i++) {
> > +             for (j = 0; i+j < c.region_size && j < pattern_size; j++)
> > +                     memset((char *) src_addr + i+j, pattern[j], 1);
> > +             i += pattern_size-1;
> > +     }
> > +
> > +     align_mask = ~(c.dest_alignment - 1);
> > +     offset = (c.overlapping) ? -c.dest_alignment : c.dest_alignment;
>
> A comment for what the above two lines are doing would be a nice touch.
>
> ...
> > +     start_ns = t_start.tv_sec * 1000000000ULL + t_start.tv_nsec;
> > +     end_ns = t_end.tv_sec * 1000000000ULL + t_end.tv_nsec;
>
> A const or #defined for all those 0000's would help.
>
> ...
> > +int main(int argc, char *argv[])
> > +{
> > +     int failures = 0;
> > +     int i;
> > +
> > +     struct test test_cases[] = {
> > +             /* Expected mremap failures */
> > +             MAKE_TEST(_4KB, _4KB, _4KB, 1 /* overlaps */, 1 /* fails */,
>
> Named flags instead of 1's and 0's would avoid the need for messy comments.
>
> > +                       "mremap - Source and Destination Regions Overlapping"),
> > +             MAKE_TEST(_4KB, _1KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> > +                       "mremap - Destination Address Misaligned (1KB-aligned)"),
> > +             MAKE_TEST(_1KB, _4KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> > +                       "mremap - Source Address Misaligned (1KB-aligned)"),
> > +
> > +             /* Src addr PTE aligned */
> > +             MAKE_SIMPLE_TEST(PTE, PTE, _8KB),
> > +
> > +             /* Src addr 1MB aligned */
> > +             MAKE_SIMPLE_TEST(_1MB, PTE, _2MB),
> > +             MAKE_SIMPLE_TEST(_1MB, _1MB, _2MB),
> > +
> > +             /* Src addr PMD aligned */
> > +             MAKE_SIMPLE_TEST(PMD, PTE, _4MB),
> > +             MAKE_SIMPLE_TEST(PMD, _1MB, _4MB),
> > +             MAKE_SIMPLE_TEST(PMD, PMD, _4MB),
> > +
> > +             /* Src addr PUD aligned */
> > +             MAKE_SIMPLE_TEST(PUD, PTE, _2GB),
> > +             MAKE_SIMPLE_TEST(PUD, _1MB, _2GB),
> > +             MAKE_SIMPLE_TEST(PUD, PMD, _2GB),
> > +             MAKE_SIMPLE_TEST(PUD, PUD, _2GB),
>
>
> Too concise. Not fun lining these up with the stdout report.
>
>
> thanks,
> --
> John Hubbard
> NVIDIA
Kalesh Singh Oct. 1, 2020, 3:59 p.m. UTC | #10
On Thu, Oct 1, 2020 at 8:27 AM Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Wed, Sep 30, 2020 at 03:42:17PM -0700, Lokesh Gidra wrote:
> > On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> > <kirill.shutemov@linux.intel.com> wrote:
> > >
> > > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > > the source and destination addresses are PMD/PUD-aligned and
> > > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > > x86. Other architectures where this type of move is supported and known to
> > > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > > and HAVE_MOVE_PUD.
> > > >
> > > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > > region on x86 and arm64:
> > > >
> > > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > > >
> > > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > > >
> > > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > > >           give a total of ~150x speed up on arm64.
> > >
> > > Is there a *real* workload that benefits from HAVE_MOVE_PUD?
> > >
> > We have a Java garbage collector under development which requires
> > moving physical pages of a multi-gigabyte heap using mremap. During this
> > move, the application threads have to be paused for correctness. It is
> > critical to keep this pause as short as possible to avoid jitter
> > during user interaction. This is where HAVE_MOVE_PUD will greatly
> > help.
>
> Any chance to quantify the effect of mremap() with and without
> HAVE_MOVE_PUD?
>
> I doubt it's a major contributor to the GC pause. I expect you need to
> move tens of gigs to get a sizable effect. And if your GC routinely moves
> tens of gigs, maybe the problem is somewhere else?
>
> I'm asking for numbers, because an increase in complexity comes with a cost.
> If it doesn't provide a substantial benefit to a real workload,
> maintaining the code forever doesn't make sense.

Lokesh on this thread would be better able to answer this. I'll let
him weigh in here.
Thanks, Kalesh
>
> --
>  Kirill A. Shutemov
Kalesh Singh Oct. 1, 2020, 4:40 p.m. UTC | #11
On Thu, Oct 1, 2020 at 8:37 AM Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Wed, Sep 30, 2020 at 10:21:20PM +0000, Kalesh Singh wrote:
> > Android needs to move large memory regions for garbage collection.
> > Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD
> > level if the source and destination addresses are PUD-aligned.
> > For CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves
> > PGD entries, since the PUD entry is “folded back” onto the PGD entry.
> > Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
> > isn't supported/tested can turn this off by not selecting the config.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >  arch/Kconfig                     |   7 +
> >  arch/arm64/include/asm/pgtable.h |   1 +
> >  mm/mremap.c                      | 211 ++++++++++++++++++++++++++-----
> >  3 files changed, 189 insertions(+), 30 deletions(-)
> >
> > diff --git a/arch/Kconfig b/arch/Kconfig
> > index af14a567b493..5eabaa00bf9b 100644
> > --- a/arch/Kconfig
> > +++ b/arch/Kconfig
> > @@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
> >         Archs need to ensure they use a high enough resolution clock to
> >         support irq time accounting and then call enable_sched_clock_irqtime().
> >
> > +config HAVE_MOVE_PUD
> > +     bool
> > +     help
> > +       Architectures that select this are able to move page tables at the
> > +       PUD level. If there are only 3 page table levels, the move effectively
> > +       happens at the PGD level.
> > +
> >  config HAVE_MOVE_PMD
> >       bool
> >       help
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index d5d3fbe73953..8848125e3024 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
> >  #define pfn_pud(pfn,prot)    __pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
> >
> >  #define set_pmd_at(mm, addr, pmdp, pmd)      set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> > +#define set_pud_at(mm, addr, pudp, pud)      set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
> >
> >  #define __p4d_to_phys(p4d)   __pte_to_phys(p4d_pte(p4d))
> >  #define __phys_to_p4d_val(phys)      __phys_to_pte_val(phys)
>
> This doesn't belong in this patch.

Good catch. I'll move this into a separate patch.
>
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 138abbae4f75..a5a1440bd366 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -249,14 +249,167 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> >
> >       return true;
> >  }
> > +#else
> > +static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > +               unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
> > +{
> > +     return false;
> > +}
> >  #endif
> >
> > +#ifdef CONFIG_HAVE_MOVE_PUD
> > +static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
> > +{
> > +     pgd_t *pgd;
> > +     p4d_t *p4d;
> > +     pud_t *pud;
> > +
> > +     pgd = pgd_offset(mm, addr);
> > +     if (pgd_none_or_clear_bad(pgd))
> > +             return NULL;
> > +
> > +     p4d = p4d_offset(pgd, addr);
> > +     if (p4d_none_or_clear_bad(p4d))
> > +             return NULL;
> > +
> > +     pud = pud_offset(p4d, addr);
> > +     if (pud_none_or_clear_bad(pud))
> > +             return NULL;
> > +
> > +     return pud;
> > +}
> > +
> > +static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
> > +                         unsigned long addr)
> > +{
> > +     pgd_t *pgd;
> > +     p4d_t *p4d;
> > +     pud_t *pud;
> > +
> > +     pgd = pgd_offset(mm, addr);
> > +     p4d = p4d_alloc(mm, pgd, addr);
> > +     if (!p4d)
> > +             return NULL;
> > +     pud = pud_alloc(mm, p4d, addr);
> > +     if (!pud)
> > +             return NULL;
> > +
> > +     return pud;
> > +}
> > +
> > +static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> > +               unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> > +{
> > +     spinlock_t *old_ptl, *new_ptl;
> > +     struct mm_struct *mm = vma->vm_mm;
> > +     pud_t pud;
> > +
> > +     /*
> > +      * The destination pud shouldn't be established, free_pgtables()
> > +      * should have released it.
> > +      */
> > +     if (WARN_ON_ONCE(!pud_none(*new_pud)))
> > +             return false;
> > +
> > +     /*
> > +      * We don't have to worry about the ordering of src and dst
> > +      * ptlocks because exclusive mmap_lock prevents deadlock.
> > +      */
> > +     old_ptl = pud_lock(vma->vm_mm, old_pud);
> > +     new_ptl = pud_lockptr(mm, new_pud);
> > +     if (new_ptl != old_ptl)
> > +             spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > +
> > +     /* Clear the pud */
> > +     pud = *old_pud;
> > +     pud_clear(old_pud);
> > +
> > +     VM_BUG_ON(!pud_none(*new_pud));
> > +
> > +     /* Set the new pud */
> > +     set_pud_at(mm, new_addr, new_pud, pud);
> > +     flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
> > +     if (new_ptl != old_ptl)
> > +             spin_unlock(new_ptl);
> > +     spin_unlock(old_ptl);
> > +
> > +     return true;
> > +}
> > +#else
> > +static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> > +               unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> > +{
> > +     return false;
> > +}
> > +#endif
> > +
> > +enum pgt_entry {
> > +     NORMAL_PMD,
> > +     HPAGE_PMD,
> > +     NORMAL_PUD,
> > +};
> > +
> > +/*
> > + * Returns an extent of the corresponding size for the pgt_entry specified if valid.
> > + * Else returns a smaller extent bounded by the end of the source and destination
> > + * pgt_entry. Returns 0 if an invalid pgt_entry is specified.
> > + */
> > +static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
> > +                     unsigned long old_end, unsigned long new_addr)
> > +{
> > +     unsigned long next, extent, mask, size;
> > +
> > +     if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
> > +             mask = PMD_MASK;
> > +             size = PMD_SIZE;
> > +     } else if (entry == NORMAL_PUD) {
> > +             mask = PUD_MASK;
> > +             size = PUD_SIZE;
> > +     } else
> > +             return 0;
> > +
> > +     next = (old_addr + size) & mask;
> > +     /* even if next overflowed, extent below will be ok */
> > +     extent = (next > old_end) ? old_end - old_addr : next - old_addr;
> > +     next = (new_addr + size) & mask;
> > +     if (extent > next - new_addr)
> > +             extent = next - new_addr;
> > +     return extent;
> > +}
> > +
> > +/*
> > + * Attempts to speedup the move by moving entry at the level corresponding to
> > + * pgt_entry. Returns true if the move was successful, else false.
> > + */
> > +static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
> > +                     unsigned long old_addr, unsigned long new_addr, void *old_entry,
> > +                     void *new_entry, bool need_rmap_locks)
> > +{
> > +     bool moved = false;
> > +
> > +     /* See comment in move_ptes() */
> > +     if (need_rmap_locks)
> > +             take_rmap_locks(vma);
> > +     if (entry == NORMAL_PMD)
> > +             moved =  move_normal_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> > +     else if (entry == NORMAL_PUD)
> > +             moved =  move_normal_pud(vma, old_addr, new_addr, old_entry, new_entry);
> > +     else if (entry == HPAGE_PMD)
> > +             moved =  move_huge_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> > +     else
> > +             WARN_ON_ONCE(1);
>
> BUILD_BUG() should work.

This doesn't get caught at compile time since entry isn't a constant.
>
> And why not use switch() instead of ifs.

I'll move to switch() in the next version.
Thanks, Kalesh
>
> > +     if (need_rmap_locks)
> > +             drop_rmap_locks(vma);
> > +
> > +     return moved;
> > +}
> > +
> >  unsigned long move_page_tables(struct vm_area_struct *vma,
> >               unsigned long old_addr, struct vm_area_struct *new_vma,
> >               unsigned long new_addr, unsigned long len,
> >               bool need_rmap_locks)
> >  {
> > -     unsigned long extent, next, old_end;
> > +     unsigned long extent, old_end;
> >       struct mmu_notifier_range range;
> >       pmd_t *old_pmd, *new_pmd;
> >
> > @@ -269,14 +422,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >
> >       for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> >               cond_resched();
> > -             next = (old_addr + PMD_SIZE) & PMD_MASK;
> > -             /* even if next overflowed, extent below will be ok */
> > -             extent = next - old_addr;
> > -             if (extent > old_end - old_addr)
> > -                     extent = old_end - old_addr;
> > -             next = (new_addr + PMD_SIZE) & PMD_MASK;
> > -             if (extent > next - new_addr)
> > -                     extent = next - new_addr;
> > +#ifdef CONFIG_HAVE_MOVE_PUD
> > +             /*
> > +              * If extent is PUD-sized try to speed up the move by moving at the
> > +              * PUD level if possible.
> > +              */
> > +             extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
> > +             if (extent == PUD_SIZE) {
> > +                     pud_t *old_pud, *new_pud;
> > +
> > +                     old_pud = get_old_pud(vma->vm_mm, old_addr);
> > +                     if (!old_pud)
> > +                             continue;
> > +                     new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
> > +                     if (!new_pud)
> > +                             break;
> > +                     if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
> > +                                        old_pud, new_pud, need_rmap_locks))
> > +                             continue;
> > +             }
> > +#endif
> > +             extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
> >               old_pmd = get_old_pmd(vma->vm_mm, old_addr);
> >               if (!old_pmd)
> >                       continue;
> > @@ -284,18 +450,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >               if (!new_pmd)
> >                       break;
> >               if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
> > -                     if (extent == HPAGE_PMD_SIZE) {
> > -                             bool moved;
> > -                             /* See comment in move_ptes() */
> > -                             if (need_rmap_locks)
> > -                                     take_rmap_locks(vma);
> > -                             moved = move_huge_pmd(vma, old_addr, new_addr,
> > -                                                   old_pmd, new_pmd);
> > -                             if (need_rmap_locks)
> > -                                     drop_rmap_locks(vma);
> > -                             if (moved)
> > -                                     continue;
> > -                     }
> > +                     if (extent == HPAGE_PMD_SIZE &&
> > +                         move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd,
> > +                                        new_pmd, need_rmap_locks))
> > +                             continue;
> >                       split_huge_pmd(vma, old_pmd, old_addr);
> >                       if (pmd_trans_unstable(old_pmd))
> >                               continue;
> > @@ -305,15 +463,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >                        * If the extent is PMD-sized, try to speed the move by
> >                        * moving at the PMD level if possible.
> >                        */
> > -                     bool moved;
> > -
> > -                     if (need_rmap_locks)
> > -                             take_rmap_locks(vma);
> > -                     moved = move_normal_pmd(vma, old_addr, new_addr,
> > -                                             old_pmd, new_pmd);
> > -                     if (need_rmap_locks)
> > -                             drop_rmap_locks(vma);
> > -                     if (moved)
> > +                     if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, old_pmd,
> > +                                        new_pmd, need_rmap_locks))
> >                               continue;
> >  #endif
> >               }
> > --
> > 2.28.0.709.gb0816b6eb0-goog
> >
>
> --
>  Kirill A. Shutemov
Kalesh Singh Oct. 1, 2020, 6:10 p.m. UTC | #12
On Thu, Oct 1, 2020 at 12:40 PM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> On Thu, Oct 1, 2020 at 8:37 AM Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> >
> > On Wed, Sep 30, 2020 at 10:21:20PM +0000, Kalesh Singh wrote:
> > > Android needs to move large memory regions for garbage collection.
> > > Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD
> > > level if the source and destination addresses are PUD-aligned.
> > > For CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves
> > > PGD entries, since the PUD entry is “folded back” onto the PGD entry.
> > > Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
> > > isn't supported/tested can turn this off by not selecting the config.
> > >
> > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > > ---
> > >  arch/Kconfig                     |   7 +
> > >  arch/arm64/include/asm/pgtable.h |   1 +
> > >  mm/mremap.c                      | 211 ++++++++++++++++++++++++++-----
> > >  3 files changed, 189 insertions(+), 30 deletions(-)
> > >
> > > diff --git a/arch/Kconfig b/arch/Kconfig
> > > index af14a567b493..5eabaa00bf9b 100644
> > > --- a/arch/Kconfig
> > > +++ b/arch/Kconfig
> > > @@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
> > >         Archs need to ensure they use a high enough resolution clock to
> > >         support irq time accounting and then call enable_sched_clock_irqtime().
> > >
> > > +config HAVE_MOVE_PUD
> > > +     bool
> > > +     help
> > > +       Architectures that select this are able to move page tables at the
> > > +       PUD level. If there are only 3 page table levels, the move effectively
> > > +       happens at the PGD level.
> > > +
> > >  config HAVE_MOVE_PMD
> > >       bool
> > >       help
> > > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > > index d5d3fbe73953..8848125e3024 100644
> > > --- a/arch/arm64/include/asm/pgtable.h
> > > +++ b/arch/arm64/include/asm/pgtable.h
> > > @@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
> > >  #define pfn_pud(pfn,prot)    __pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
> > >
> > >  #define set_pmd_at(mm, addr, pmdp, pmd)      set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> > > +#define set_pud_at(mm, addr, pudp, pud)      set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
> > >
> > >  #define __p4d_to_phys(p4d)   __pte_to_phys(p4d_pte(p4d))
> > >  #define __phys_to_p4d_val(phys)      __phys_to_pte_val(phys)
> >
> > This doesn't belong in this patch.
> Good catch. I'll move this into a separate patch.
> >
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 138abbae4f75..a5a1440bd366 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -249,14 +249,167 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > >
> > >       return true;
> > >  }
> > > +#else
> > > +static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +               unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
> > > +{
> > > +     return false;
> > > +}
> > >  #endif
> > >
> > > +#ifdef CONFIG_HAVE_MOVE_PUD
> > > +static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
> > > +{
> > > +     pgd_t *pgd;
> > > +     p4d_t *p4d;
> > > +     pud_t *pud;
> > > +
> > > +     pgd = pgd_offset(mm, addr);
> > > +     if (pgd_none_or_clear_bad(pgd))
> > > +             return NULL;
> > > +
> > > +     p4d = p4d_offset(pgd, addr);
> > > +     if (p4d_none_or_clear_bad(p4d))
> > > +             return NULL;
> > > +
> > > +     pud = pud_offset(p4d, addr);
> > > +     if (pud_none_or_clear_bad(pud))
> > > +             return NULL;
> > > +
> > > +     return pud;
> > > +}
> > > +
> > > +static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
> > > +                         unsigned long addr)
> > > +{
> > > +     pgd_t *pgd;
> > > +     p4d_t *p4d;
> > > +     pud_t *pud;
> > > +
> > > +     pgd = pgd_offset(mm, addr);
> > > +     p4d = p4d_alloc(mm, pgd, addr);
> > > +     if (!p4d)
> > > +             return NULL;
> > > +     pud = pud_alloc(mm, p4d, addr);
> > > +     if (!pud)
> > > +             return NULL;
> > > +
> > > +     return pud;
> > > +}
> > > +
> > > +static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> > > +               unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> > > +{
> > > +     spinlock_t *old_ptl, *new_ptl;
> > > +     struct mm_struct *mm = vma->vm_mm;
> > > +     pud_t pud;
> > > +
> > > +     /*
> > > +      * The destination pud shouldn't be established, free_pgtables()
> > > +      * should have released it.
> > > +      */
> > > +     if (WARN_ON_ONCE(!pud_none(*new_pud)))
> > > +             return false;
> > > +
> > > +     /*
> > > +      * We don't have to worry about the ordering of src and dst
> > > +      * ptlocks because exclusive mmap_lock prevents deadlock.
> > > +      */
> > > +     old_ptl = pud_lock(vma->vm_mm, old_pud);
> > > +     new_ptl = pud_lockptr(mm, new_pud);
> > > +     if (new_ptl != old_ptl)
> > > +             spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +     /* Clear the pud */
> > > +     pud = *old_pud;
> > > +     pud_clear(old_pud);
> > > +
> > > +     VM_BUG_ON(!pud_none(*new_pud));
> > > +
> > > +     /* Set the new pud */
> > > +     set_pud_at(mm, new_addr, new_pud, pud);
> > > +     flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
> > > +     if (new_ptl != old_ptl)
> > > +             spin_unlock(new_ptl);
> > > +     spin_unlock(old_ptl);
> > > +
> > > +     return true;
> > > +}
> > > +#else
> > > +static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
> > > +               unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
> > > +{
> > > +     return false;
> > > +}
> > > +#endif
> > > +
> > > +enum pgt_entry {
> > > +     NORMAL_PMD,
> > > +     HPAGE_PMD,
> > > +     NORMAL_PUD,
> > > +};
> > > +
> > > +/*
> > > + * Returns an extent of the corresponding size for the pgt_entry specified if valid.
> > > + * Else returns a smaller extent bounded by the end of the source and destination
> > > + * pgt_entry. Returns 0 if an invalid pgt_entry is specified.
> > > + */
> > > +static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
> > > +                     unsigned long old_end, unsigned long new_addr)
> > > +{
> > > +     unsigned long next, extent, mask, size;
> > > +
> > > +     if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
> > > +             mask = PMD_MASK;
> > > +             size = PMD_SIZE;
> > > +     } else if (entry == NORMAL_PUD) {
> > > +             mask = PUD_MASK;
> > > +             size = PUD_SIZE;
> > > +     } else
> > > +             return 0;
> > > +
> > > +     next = (old_addr + size) & mask;
> > > +     /* even if next overflowed, extent below will be ok */
> > > +     extent = (next > old_end) ? old_end - old_addr : next - old_addr;
> > > +     next = (new_addr + size) & mask;
> > > +     if (extent > next - new_addr)
> > > +             extent = next - new_addr;
> > > +     return extent;
> > > +}
> > > +
> > > +/*
> > > + * Attempts to speedup the move by moving entry at the level corresponding to
> > > + * pgt_entry. Returns true if the move was successful, else false.
> > > + */
> > > +static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
> > > +                     unsigned long old_addr, unsigned long new_addr, void *old_entry,
> > > +                     void *new_entry, bool need_rmap_locks)
> > > +{
> > > +     bool moved = false;
> > > +
> > > +     /* See comment in move_ptes() */
> > > +     if (need_rmap_locks)
> > > +             take_rmap_locks(vma);
> > > +     if (entry == NORMAL_PMD)
> > > +             moved =  move_normal_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> > > +     else if (entry == NORMAL_PUD)
> > > +             moved =  move_normal_pud(vma, old_addr, new_addr, old_entry, new_entry);
> > > +     else if (entry == HPAGE_PMD)
> > > +             moved =  move_huge_pmd(vma, old_addr, new_addr, old_entry, new_entry);
> > > +     else
> > > +             WARN_ON_ONCE(1);
> >
> > BUILD_BUG() should work.
Please ignore the previous comment (quoted below): you are right, BUILD_BUG() would work.
> This doesn't get caught at compile time since entry isn't a constant.
> >
> > And why not use switch() instead of ifs.
> I'll move to switch() in the next version.
> Thanks, Kalesh
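
For reference, a rough sketch of how the switch()-based version might look
(illustrative only, not the actual next revision; whether the default case
can become BUILD_BUG() depends on the function being inlined with a
constant 'entry'):

static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
                           unsigned long old_addr, unsigned long new_addr,
                           void *old_entry, void *new_entry, bool need_rmap_locks)
{
        bool moved = false;

        /* See comment in move_ptes() */
        if (need_rmap_locks)
                take_rmap_locks(vma);

        switch (entry) {
        case NORMAL_PMD:
                moved = move_normal_pmd(vma, old_addr, new_addr,
                                        old_entry, new_entry);
                break;
        case NORMAL_PUD:
                moved = move_normal_pud(vma, old_addr, new_addr,
                                        old_entry, new_entry);
                break;
        case HPAGE_PMD:
                moved = move_huge_pmd(vma, old_addr, new_addr,
                                      old_entry, new_entry);
                break;
        default:
                WARN_ON_ONCE(1);
                break;
        }

        if (need_rmap_locks)
                drop_rmap_locks(vma);

        return moved;
}
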
> >
> > > +     if (need_rmap_locks)
> > > +             drop_rmap_locks(vma);
> > > +
> > > +     return moved;
> > > +}
> > > +
> > >  unsigned long move_page_tables(struct vm_area_struct *vma,
> > >               unsigned long old_addr, struct vm_area_struct *new_vma,
> > >               unsigned long new_addr, unsigned long len,
> > >               bool need_rmap_locks)
> > >  {
> > > -     unsigned long extent, next, old_end;
> > > +     unsigned long extent, old_end;
> > >       struct mmu_notifier_range range;
> > >       pmd_t *old_pmd, *new_pmd;
> > >
> > > @@ -269,14 +422,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > >
> > >       for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> > >               cond_resched();
> > > -             next = (old_addr + PMD_SIZE) & PMD_MASK;
> > > -             /* even if next overflowed, extent below will be ok */
> > > -             extent = next - old_addr;
> > > -             if (extent > old_end - old_addr)
> > > -                     extent = old_end - old_addr;
> > > -             next = (new_addr + PMD_SIZE) & PMD_MASK;
> > > -             if (extent > next - new_addr)
> > > -                     extent = next - new_addr;
> > > +#ifdef CONFIG_HAVE_MOVE_PUD
> > > +             /*
> > > +              * If extent is PUD-sized try to speed up the move by moving at the
> > > +              * PUD level if possible.
> > > +              */
> > > +             extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
> > > +             if (extent == PUD_SIZE) {
> > > +                     pud_t *old_pud, *new_pud;
> > > +
> > > +                     old_pud = get_old_pud(vma->vm_mm, old_addr);
> > > +                     if (!old_pud)
> > > +                             continue;
> > > +                     new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
> > > +                     if (!new_pud)
> > > +                             break;
> > > +                     if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
> > > +                                        old_pud, new_pud, need_rmap_locks))
> > > +                             continue;
> > > +             }
> > > +#endif
> > > +             extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
> > >               old_pmd = get_old_pmd(vma->vm_mm, old_addr);
> > >               if (!old_pmd)
> > >                       continue;
> > > @@ -284,18 +450,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > >               if (!new_pmd)
> > >                       break;
> > >               if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
> > > -                     if (extent == HPAGE_PMD_SIZE) {
> > > -                             bool moved;
> > > -                             /* See comment in move_ptes() */
> > > -                             if (need_rmap_locks)
> > > -                                     take_rmap_locks(vma);
> > > -                             moved = move_huge_pmd(vma, old_addr, new_addr,
> > > -                                                   old_pmd, new_pmd);
> > > -                             if (need_rmap_locks)
> > > -                                     drop_rmap_locks(vma);
> > > -                             if (moved)
> > > -                                     continue;
> > > -                     }
> > > +                     if (extent == HPAGE_PMD_SIZE &&
> > > +                         move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd,
> > > +                                        new_pmd, need_rmap_locks))
> > > +                             continue;
> > >                       split_huge_pmd(vma, old_pmd, old_addr);
> > >                       if (pmd_trans_unstable(old_pmd))
> > >                               continue;
> > > @@ -305,15 +463,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > >                        * If the extent is PMD-sized, try to speed the move by
> > >                        * moving at the PMD level if possible.
> > >                        */
> > > -                     bool moved;
> > > -
> > > -                     if (need_rmap_locks)
> > > -                             take_rmap_locks(vma);
> > > -                     moved = move_normal_pmd(vma, old_addr, new_addr,
> > > -                                             old_pmd, new_pmd);
> > > -                     if (need_rmap_locks)
> > > -                             drop_rmap_locks(vma);
> > > -                     if (moved)
> > > +                     if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, old_pmd,
> > > +                                        new_pmd, need_rmap_locks))
> > >                               continue;
> > >  #endif
> > >               }
> > > --
> > > 2.28.0.709.gb0816b6eb0-goog
> > >
> >
> > --
> >  Kirill A. Shutemov
> >
> > --
> > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
> >
Lokesh Gidra Oct. 2, 2020, 12:09 a.m. UTC | #13
On Thu, Oct 1, 2020 at 9:00 AM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> On Thu, Oct 1, 2020 at 8:27 AM Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> >
> > On Wed, Sep 30, 2020 at 03:42:17PM -0700, Lokesh Gidra wrote:
> > > On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> > > <kirill.shutemov@linux.intel.com> wrote:
> > > >
> > > > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > > > the source and destination addresses are PMD/PUD-aligned and
> > > > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > > > x86. Other architectures where this type of move is supported and known to
> > > > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > > > and HAVE_MOVE_PUD.
> > > > >
> > > > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > > > region on x86 and arm64:
> > > > >
> > > > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > > > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > > > >
> > > > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > > > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > > > >
> > > > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > > > >           give a total of ~150x speed up on arm64.
> > > >
> > > > Is there a *real* workload that benefit from HAVE_MOVE_PUD?
> > > >
> > > We have a Java garbage collector under development which requires
> > > moving physical pages of multi-gigabyte heap using mremap. During this
> > > move, the application threads have to be paused for correctness. It is
> > > critical to keep this pause as short as possible to avoid jitters
> > > during user interaction. This is where HAVE_MOVE_PUD will greatly
> > > help.
> >
> > Any chance to quantify the effect of mremap() with and without
> > HAVE_MOVE_PUD?
> >
> > I doubt it's a major contributor to the GC pause. I expect you need to
> > move tens of gigs to get sizable effect. And if your GC routinely moves
> > tens of gigs, maybe problem somewhere else?
> >
> > I'm asking for numbers, because increase in complexity comes with cost.
> > If it doesn't provide an substantial benefit to a real workload
> > maintaining the code forever doesn't make sense.
> >

mremap is indeed the biggest contributor to the GC pause. It has to
take place in what is typically known as a 'stop-the-world' pause,
wherein all application threads are paused. During this pause the GC
thread flips the GC roots (threads' stacks, globals etc.), and then
resumes threads along with concurrent compaction of the heap. This
GC-root flip differs depending on which compaction algorithm is being
used.

In our case it involves updating object references in threads' stacks
and remapping the Java heap to a different location. The threads' stacks
can be handled in parallel with the mremap. Therefore, the dominant
factor is indeed the cost of mremap. From patches 2 and 4, it is clear
that remapping 1GB without this optimization will take ~9ms on arm64.

Although this mremap has to happen only once every GC cycle, and the
typical size is also not going to be more than a GB or 2, pausing
application threads for ~9ms is guaranteed to cause jitters. OTOH,
with this optimization, mremap is reduced to ~60us, which is a totally
acceptable pause time.

Unfortunately, implementation of the new GC algorithm hasn't yet
reached the point where I can quantify the effect of this
optimization. But I can confirm that without this optimization the new
GC will not be approved.
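
To make the scenario concrete, here is a minimal userspace sketch
(illustrative only; not the selftest added in patch 1) of the kind of move
being timed. Note that the PUD-level fast path additionally requires both
the source and destination to be 1GB-aligned, which this sketch does not
arrange:

/*
 * Minimal illustration: move a 1GB anonymous mapping to a reserved
 * destination and time it. Addresses are whatever mmap() returns, so
 * they are not guaranteed to be PUD (1GB) aligned; the kernel selftest
 * arranges the alignment explicitly.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SZ (1UL << 30) /* 1GB */

int main(void)
{
        void *src = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        void *dst = mmap(NULL, SZ, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct timespec t0, t1;
        void *moved;

        if (src == MAP_FAILED || dst == MAP_FAILED)
                return 1;
        memset(src, 0xab, SZ);  /* fault the pages in */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        moved = mremap(src, SZ, SZ, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (moved == MAP_FAILED)
                return 1;

        printf("1GB mremap took %ld us\n",
               (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
               (t1.tv_nsec - t0.tv_nsec) / 1000);
        return 0;
}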


> Lokesh on this thread would be better able to answer this. I'll let
> him weigh in here.
> Thanks, Kalesh
> >
> > --
> >  Kirill A. Shutemov
> >
> > --
> > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
> >
Kirill A. Shutemov Oct. 2, 2020, 5:35 a.m. UTC | #14
On Thu, Oct 01, 2020 at 05:09:02PM -0700, Lokesh Gidra wrote:
> On Thu, Oct 1, 2020 at 9:00 AM Kalesh Singh <kaleshsingh@google.com> wrote:
> >
> > On Thu, Oct 1, 2020 at 8:27 AM Kirill A. Shutemov
> > <kirill.shutemov@linux.intel.com> wrote:
> > >
> > > On Wed, Sep 30, 2020 at 03:42:17PM -0700, Lokesh Gidra wrote:
> > > > On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> > > > <kirill.shutemov@linux.intel.com> wrote:
> > > > >
> > > > > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > > > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > > > > the source and destination addresses are PMD/PUD-aligned and
> > > > > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > > > > x86. Other architectures where this type of move is supported and known to
> > > > > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > > > > and HAVE_MOVE_PUD.
> > > > > >
> > > > > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > > > > region on x86 and arm64:
> > > > > >
> > > > > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > > > > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > > > > >
> > > > > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > > > > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > > > > >
> > > > > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > > > > >           give a total of ~150x speed up on arm64.
> > > > >
> > > > > Is there a *real* workload that benefit from HAVE_MOVE_PUD?
> > > > >
> > > > We have a Java garbage collector under development which requires
> > > > moving physical pages of multi-gigabyte heap using mremap. During this
> > > > move, the application threads have to be paused for correctness. It is
> > > > critical to keep this pause as short as possible to avoid jitters
> > > > during user interaction. This is where HAVE_MOVE_PUD will greatly
> > > > help.
> > >
> > > Any chance to quantify the effect of mremap() with and without
> > > HAVE_MOVE_PUD?
> > >
> > > I doubt it's a major contributor to the GC pause. I expect you need to
> > > move tens of gigs to get sizable effect. And if your GC routinely moves
> > > tens of gigs, maybe problem somewhere else?
> > >
> > > I'm asking for numbers, because increase in complexity comes with cost.
> > > If it doesn't provide an substantial benefit to a real workload
> > > maintaining the code forever doesn't make sense.
> >
> mremap is indeed the biggest contributor to the GC pause. It has to
> take place in what is typically known as a 'stop-the-world' pause,
> wherein all application threads are paused. During this pause the GC
> thread flips the GC roots (threads' stacks, globals etc.), and then
> resumes threads along with concurrent compaction of the heap.This
> GC-root flip differs depending on which compaction algorithm is being
> used.
> 
> In our case it involves updating object references in threads' stacks
> and remapping java heap to a different location. The threads' stacks
> can be handled in parallel with the mremap. Therefore, the dominant
> factor is indeed the cost of mremap. From patches 2 and 4, it is clear
> that remapping 1GB without this optimization will take ~9ms on arm64.
> 
> Although this mremap has to happen only once every GC cycle, and the
> typical size is also not going to be more than a GB or 2, pausing
> application threads for ~9ms is guaranteed to cause jitters. OTOH,
> with this optimization, mremap is reduced to ~60us, which is a totally
> acceptable pause time.
> 
> Unfortunately, implementation of the new GC algorithm hasn't yet
> reached the point where I can quantify the effect of this
> optimization. But I can confirm that without this optimization the new
> GC will not be approved.

IIUC, the 9ms -> 90us improvement is attributed to the combination of
HAVE_MOVE_PMD and HAVE_MOVE_PUD, right? I expect HAVE_MOVE_PMD to be
reasonable for some workloads, but the marginal benefit of HAVE_MOVE_PUD is
in doubt. Do you see it being useful for your workload?
Lokesh Gidra Oct. 2, 2020, 6:39 a.m. UTC | #15
On Thu, Oct 1, 2020 at 10:36 PM Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Thu, Oct 01, 2020 at 05:09:02PM -0700, Lokesh Gidra wrote:
> > On Thu, Oct 1, 2020 at 9:00 AM Kalesh Singh <kaleshsingh@google.com> wrote:
> > >
> > > On Thu, Oct 1, 2020 at 8:27 AM Kirill A. Shutemov
> > > <kirill.shutemov@linux.intel.com> wrote:
> > > >
> > > > On Wed, Sep 30, 2020 at 03:42:17PM -0700, Lokesh Gidra wrote:
> > > > > On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> > > > > <kirill.shutemov@linux.intel.com> wrote:
> > > > > >
> > > > > > On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > > > > > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > > > > > the source and destination addresses are PMD/PUD-aligned and
> > > > > > > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > > > > > > x86. Other architectures where this type of move is supported and known to
> > > > > > > be safe can also opt-in to these optimizations by enabling HAVE_MOVE_PMD
> > > > > > > and HAVE_MOVE_PUD.
> > > > > > >
> > > > > > > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > > > > > > region on x86 and arm64:
> > > > > > >
> > > > > > >     - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > > > > > >     - Enabling HAVE_MOVE_PUD on x86   : ~13x speed up
> > > > > > >
> > > > > > >     - Enabling HAVE_MOVE_PMD on arm64 : ~ 8x speed up
> > > > > > >     - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> > > > > > >
> > > > > > >           Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD
> > > > > > >           give a total of ~150x speed up on arm64.
> > > > > >
> > > > > > Is there a *real* workload that benefit from HAVE_MOVE_PUD?
> > > > > >
> > > > > We have a Java garbage collector under development which requires
> > > > > moving physical pages of multi-gigabyte heap using mremap. During this
> > > > > move, the application threads have to be paused for correctness. It is
> > > > > critical to keep this pause as short as possible to avoid jitters
> > > > > during user interaction. This is where HAVE_MOVE_PUD will greatly
> > > > > help.
> > > >
> > > > Any chance to quantify the effect of mremap() with and without
> > > > HAVE_MOVE_PUD?
> > > >
> > > > I doubt it's a major contributor to the GC pause. I expect you need to
> > > > move tens of gigs to get sizable effect. And if your GC routinely moves
> > > > tens of gigs, maybe problem somewhere else?
> > > >
> > > > I'm asking for numbers, because increase in complexity comes with cost.
> > > > If it doesn't provide an substantial benefit to a real workload
> > > > maintaining the code forever doesn't make sense.
> > >
> > mremap is indeed the biggest contributor to the GC pause. It has to
> > take place in what is typically known as a 'stop-the-world' pause,
> > wherein all application threads are paused. During this pause the GC
> > thread flips the GC roots (threads' stacks, globals etc.), and then
> > resumes threads along with concurrent compaction of the heap. This
> > GC-root flip differs depending on which compaction algorithm is being
> > used.
> >
> > In our case it involves updating object references in threads' stacks
> > and remapping the Java heap to a different location. The threads' stacks
> > can be handled in parallel with the mremap. Therefore, the dominant
> > factor is indeed the cost of mremap. From patches 2 and 4, it is clear
> > that remapping 1GB without this optimization will take ~9ms on arm64.
> >
> > Although this mremap has to happen only once every GC cycle, and the
> > typical size is also not going to be more than a GB or 2, pausing
> > application threads for ~9ms is guaranteed to cause jitters. OTOH,
> > with this optimization, mremap is reduced to ~60us, which is a totally
> > acceptable pause time.
> >
> > Unfortunately, implementation of the new GC algorithm hasn't yet
> > reached the point where I can quantify the effect of this
> > optimization. But I can confirm that without this optimization the new
> > GC will not be approved.
>
> IIUC, the 9ms -> 90us improvement is attributed to the combination of
> HAVE_MOVE_PMD and HAVE_MOVE_PUD, right? I expect HAVE_MOVE_PMD to be
> reasonable for some workloads, but the marginal benefit of HAVE_MOVE_PUD is
> in doubt. Do you see it being useful for your workload?
>

Yes, 9ms -> 90us is when both are combined. Past experience has been
that even a ~1ms stop-the-world pause is prone to cause jitters.
HAVE_MOVE_PMD takes us only so far, so HAVE_MOVE_PUD is required to
bring the mremap cost down to an acceptable level.

Ideally, I was hoping that the functionality of HAVE_MOVE_PMD could be
extended to all levels of the hierarchical page table, simplifying the
implementation in the process. Unfortunately, from patch 3 that doesn't
seem to be possible.
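
For a rough sense of scale (assuming 4KB base pages, so a PMD covers 2MB
and a PUD covers 1GB), a 1GB-aligned move costs:

    PTE level: 262144 entries copied
    PMD level:    512 entries copied
    PUD level:      1 entry copied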

> --
>  Kirill A. Shutemov