[v4,2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range

Message ID 20210121175502.274391-3-minchan@kernel.org
State New
Series Chunk Heap Support on DMA-HEAP

Commit Message

Minchan Kim Jan. 21, 2021, 5:55 p.m. UTC
Contiguous memory allocation can be stalled due to waiting on page
writeback and/or page lock, which causes unpredictable delay. That is
an unavoidable cost for a requestor who needs *big* contiguous memory,
but it is too expensive for *small* contiguous memory (e.g., order-4),
because the caller could simply retry the request in a different range
that has easily migratable pages, without stalling.

This patch introduces __GFP_NORETRY as a compaction gfp_mask in
alloc_contig_range so that it fails fast, without blocking, when it
encounters pages that would require waiting.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
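
For illustration, a minimal caller-side sketch of how a requestor of
*small* contiguous memory could use the failfast mode and simply move on
to the next candidate range instead of stalling. The helper, the
candidate-PFN array, and the assumption that the ranges sit inside a CMA
region are hypothetical, not part of this series:

/*
 * Hypothetical sketch: try a few candidate PFN ranges with the
 * opportunistic __GFP_NORETRY mode; on -EBUSY (or any other failure)
 * fall through to the next range rather than waiting on writeback or
 * page lock in the current one.
 */
static struct page *alloc_small_contig(unsigned long *candidate_pfns,
				       int nr_candidates, unsigned int order)
{
	int i;

	for (i = 0; i < nr_candidates; i++) {
		unsigned long pfn = candidate_pfns[i];
		int ret = alloc_contig_range(pfn, pfn + (1UL << order),
					     MIGRATE_CMA,
					     GFP_KERNEL | __GFP_NORETRY);

		if (!ret)
			return pfn_to_page(pfn);
		/* failfast: do not stall here, just try the next range */
	}

	return NULL;
}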

Comments

Michal Hocko Jan. 25, 2021, 1:13 p.m. UTC | #1
On Mon 25-01-21 14:12:02, Michal Hocko wrote:
> On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > Contiguous memory allocation can be stalled due to waiting
> > on page writeback and/or page lock which causes unpredictable
> > delay. It's a unavoidable cost for the requestor to get *big*
> > contiguous memory but it's expensive for *small* contiguous
> > memory(e.g., order-4) because caller could retry the request
> > in different range where would have easy migratable pages
> > without stalling.
> > 
> > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > alloc_contig_range so it will fail fast without blocking
> > when it encounters pages needed waiting.
> 
> I am not against controling how hard this allocator tries with gfp mask
> but this changelog is rather void on any data and any user.

OK, I can see that a user is in the last patch.
Minchan Kim Jan. 25, 2021, 7:33 p.m. UTC | #2
On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > Contiguous memory allocation can be stalled due to waiting
> > on page writeback and/or page lock which causes unpredictable
> > delay. It's a unavoidable cost for the requestor to get *big*
> > contiguous memory but it's expensive for *small* contiguous
> > memory(e.g., order-4) because caller could retry the request
> > in different range where would have easy migratable pages
> > without stalling.
> > 
> > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > alloc_contig_range so it will fail fast without blocking
> > when it encounters pages needed waiting.
> 
> I am not against controling how hard this allocator tries with gfp mask
> but this changelog is rather void on any data and any user.
> 
> It is also rather dubious to have retries when then caller says to not
> retry.

Since max_tries is 1 with ++tries, it shouldn't retry.

> 
> Also why didn't you consider GFP_NOWAIT semantic for non blocking mode?

GFP_NOWAIT seems like a rather low-level (specific) flag compared to what
I want to express. Even though I mentioned only page writeback/lock in the
description, the goal is to avoid whatever costly operations we might run
into later, i.e. to "fail fast", so I thought __GFP_NORETRY would be a
good fit.

> 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/page_alloc.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b031a5ae0bd5..1cdc3ee0b22e 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> >  	unsigned int nr_reclaimed;
> >  	unsigned long pfn = start;
> >  	unsigned int tries = 0;
> > +	unsigned int max_tries = 5;
> >  	int ret = 0;
> >  	struct migration_target_control mtc = {
> >  		.nid = zone_to_nid(cc->zone),
> >  		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> >  	};
> >  
> > +	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
> > +		max_tries = 1;
> > +
> >  	migrate_prep();
> >  
> >  	while (pfn < end || !list_empty(&cc->migratepages)) {
> > @@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> >  				break;
> >  			}
> >  			tries = 0;
> > -		} else if (++tries == 5) {
> > +		} else if (++tries == max_tries) {
> >  			ret = ret < 0 ? ret : -EBUSY;
> >  			break;
> >  		}
> > @@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> >  		.nr_migratepages = 0,
> >  		.order = -1,
> >  		.zone = page_zone(pfn_to_page(start)),
> > -		.mode = MIGRATE_SYNC,
> > +		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
> >  		.ignore_skip_hint = true,
> >  		.no_set_skip_hint = true,
> >  		.gfp_mask = current_gfp_context(gfp_mask),
> > -- 
> > 2.30.0.296.g2bfb1c46d8-goog
> 
> -- 
> Michal Hocko
> SUSE Labs
Michal Hocko Jan. 26, 2021, 7:44 a.m. UTC | #3
On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > Contiguous memory allocation can be stalled due to waiting
> > > on page writeback and/or page lock which causes unpredictable
> > > delay. It's a unavoidable cost for the requestor to get *big*
> > > contiguous memory but it's expensive for *small* contiguous
> > > memory(e.g., order-4) because caller could retry the request
> > > in different range where would have easy migratable pages
> > > without stalling.
> > > 
> > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > alloc_contig_range so it will fail fast without blocking
> > > when it encounters pages needed waiting.
> > 
> > I am not against controling how hard this allocator tries with gfp mask
> > but this changelog is rather void on any data and any user.
> > 
> > It is also rather dubious to have retries when then caller says to not
> > retry.
> 
> Since max_tries is 1 with ++tries, it shouldn't retry.

OK, I have missed that. This is tricky code. ASYNC mode should be
completely orthogonal to the retry count; those are different things.
The page allocator does an explicit bail-out based on __GFP_NORETRY. You
should be doing the same.
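
As a sketch of that suggestion (an assumption about a possible next
revision, not code from this patch), the bail-out in
__alloc_contig_migrate_range could key off the gfp mask that
alloc_contig_range already stores in compact_control, rather than off the
ASYNC migration mode:

	/*
	 * Sketch only: decide the single-pass bail-out from the caller's
	 * gfp mask; cc->gfp_mask is set from current_gfp_context(gfp_mask)
	 * in alloc_contig_range, so __GFP_NORETRY is visible here.
	 */
	if (cc->alloc_contig && (cc->gfp_mask & __GFP_NORETRY))
		max_tries = 1;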

> > 
> > Also why didn't you consider GFP_NOWAIT semantic for non blocking mode?
> 
> GFP_NOWAIT seems to be low(specific) flags rather than the one I want to
> express. Even though I said only page writeback/lock in the description,
> the goal is to avoid costly operations we might find later so such
> "failfast", I thought GFP_NORETRY would be good fit.

I suspect you are too focused on implementation details here. Think
about the intended semantic. Callers of this functionality will not
think about those (I hope, because if they rely on these details then the
whole thing becomes unmaintainable: any change would require an audit of
all existing users). All you should be caring about is how to control
how expensive the call can be. GFP_NOWAIT is not really low level from
that POV. It gives you a very lightweight non-sleeping attempt to
allocate. GFP_NORETRY gives you a potentially sleeping but
opportunistic-easy-to-fail attempt. And so on. See how that is
absolutely free of any page writeback or any specific locking.
Minchan Kim Jan. 26, 2021, 7:10 p.m. UTC | #4
On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > Contiguous memory allocation can be stalled due to waiting
> > > > on page writeback and/or page lock which causes unpredictable
> > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > contiguous memory but it's expensive for *small* contiguous
> > > > memory(e.g., order-4) because caller could retry the request
> > > > in different range where would have easy migratable pages
> > > > without stalling.
> > > > 
> > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > alloc_contig_range so it will fail fast without blocking
> > > > when it encounters pages needed waiting.
> > > 
> > > I am not against controling how hard this allocator tries with gfp mask
> > > but this changelog is rather void on any data and any user.
> > > 
> > > It is also rather dubious to have retries when then caller says to not
> > > retry.
> > 
> > Since max_tries is 1 with ++tries, it shouldn't retry.
> 
> OK, I have missed that. This is a tricky code. ASYNC mode should be
> completely orthogonal to the retries count. Those are different things.
> Page allocator does an explicit bail out based on __GFP_NORETRY. You
> should be doing the same.

A concern with __GFP_NOWAIT is that, regardless of the flags passed to
cma_alloc, the internal implementation of alloc_contig_range still uses
blockable operations. See __alloc_contig_migrate_range.

If we go with __GFP_NOWAIT, we should propagate the gfp_mask into
__alloc_contig_migrate_range to make cma_alloc consistent with
alloc_pages (IIUC, that's what you want: make the gfp_mask consistent
between cma_alloc and alloc_pages). But I worry that direction will
complicate things, since CMA involves a migration context as well as a
target page allocation context. Sometimes a single gfp flag has trouble
expressing both contexts at once.
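
For illustration, one way such propagation could look inside
__alloc_contig_migrate_range (a sketch of the idea under discussion, not
code from this series; which bits should be carried over is exactly the
open question):

	struct migration_target_control mtc = {
		.nid = zone_to_nid(cc->zone),
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
	};

	/*
	 * Sketch: if the alloc_contig_range caller asked for failfast
	 * behaviour, make the migration target allocation failfast too
	 * instead of keeping the hard-coded __GFP_RETRY_MAYFAIL policy.
	 */
	if (cc->gfp_mask & __GFP_NORETRY) {
		mtc.gfp_mask &= ~__GFP_RETRY_MAYFAIL;
		mtc.gfp_mask |= __GFP_NORETRY;
	}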

> 
> > > 
> > > Also why didn't you consider GFP_NOWAIT semantic for non blocking mode?
> > 
> > GFP_NOWAIT seems to be low(specific) flags rather than the one I want to
> > express. Even though I said only page writeback/lock in the description,
> > the goal is to avoid costly operations we might find later so such
> > "failfast", I thought GFP_NORETRY would be good fit.
> 
> I suspect you are too focused on implementation details here. Think
> about the indended semantic. Callers of this functionality will not
> think about those (I hope because if they rely on these details then the
> whole thing will become unmaintainable because any change would require
> an audit of all existing users). All you should be caring about is to
> control how expensive the call can be. GFP_NOWAIT is not really low
> level from that POV. It gives you a very lightweight non-sleeping
> attempt to allocate. GFP_NORETRY will give you potentially sleeping but
> an opportunistic-easy-to-fail attempt. And so on. See how that is
> absolutely free of any page writeback or any specific locking.

For the reason mentioned above, I wanted to use __GFP_NORETRY to express
an "opportunistic-easy-to-fail attempt", so that cma_alloc can "fail fast"
in the migration context.

> -- 
> Michal Hocko
> SUSE Labs
Michal Hocko Jan. 28, 2021, 7:53 a.m. UTC | #5
On Wed 27-01-21 12:42:45, Minchan Kim wrote:
> On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> > On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > > Contiguous memory allocation can be stalled due to waiting
> > > > > on page writeback and/or page lock which causes unpredictable
> > > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > > contiguous memory but it's expensive for *small* contiguous
> > > > > memory(e.g., order-4) because caller could retry the request
> > > > > in different range where would have easy migratable pages
> > > > > without stalling.
> > > > > 
> > > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > > alloc_contig_range so it will fail fast without blocking
> > > > > when it encounters pages needed waiting.
> > > > 
> > > > I am not against controling how hard this allocator tries with gfp mask
> > > > but this changelog is rather void on any data and any user.
> > > > 
> > > > It is also rather dubious to have retries when then caller says to not
> > > > retry.
> > > 
> > > Since max_tries is 1 with ++tries, it shouldn't retry.
> > 
> > OK, I have missed that. This is a tricky code. ASYNC mode should be
> > completely orthogonal to the retries count. Those are different things.
> > Page allocator does an explicit bail out based on __GFP_NORETRY. You
> > should be doing the same.
> 
> Before sending next revision, let me check this part again.
> 
> I want to use __GFP_NORETRY to indicate "opportunistic-easy-to-fail attempt"
> and I want to use ASYNC migrate_mode to help the goal.
> 
> Do you see the problem?

No, as I've said. This is a normal NORETRY policy. And ASYNC migration
is a mere implementation detail you do not have to bother your users with.
That is the semantic view. From the implementation POV it should be the
gfp mask that drives the decisions, rather than a random (ASYNC) flag
controlling the retries as you did here.

-- 
Michal Hocko
SUSE Labs
Minchan Kim Jan. 28, 2021, 4:56 p.m. UTC | #6
On Thu, Jan 28, 2021 at 08:53:25AM +0100, Michal Hocko wrote:
> On Wed 27-01-21 12:42:45, Minchan Kim wrote:
> > On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> > > On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > > > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > > > Contiguous memory allocation can be stalled due to waiting
> > > > > > on page writeback and/or page lock which causes unpredictable
> > > > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > > > contiguous memory but it's expensive for *small* contiguous
> > > > > > memory(e.g., order-4) because caller could retry the request
> > > > > > in different range where would have easy migratable pages
> > > > > > without stalling.
> > > > > > 
> > > > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > > > alloc_contig_range so it will fail fast without blocking
> > > > > > when it encounters pages needed waiting.
> > > > > 
> > > > > I am not against controling how hard this allocator tries with gfp mask
> > > > > but this changelog is rather void on any data and any user.
> > > > > 
> > > > > It is also rather dubious to have retries when then caller says to not
> > > > > retry.
> > > > 
> > > > Since max_tries is 1 with ++tries, it shouldn't retry.
> > > 
> > > OK, I have missed that. This is a tricky code. ASYNC mode should be
> > > completely orthogonal to the retries count. Those are different things.
> > > Page allocator does an explicit bail out based on __GFP_NORETRY. You
> > > should be doing the same.
> > 
> > Before sending next revision, let me check this part again.
> > 
> > I want to use __GFP_NORETRY to indicate "opportunistic-easy-to-fail attempt"
> > and I want to use ASYNC migrate_mode to help the goal.
> > 
> > Do you see the problem?
> 
> No, as I've said. This is a normal NORETRY policy. And ASYNC migration
> is a mere implementation detail you do not have bother your users about.
> This is the semantic view. From the implementation POV it should be the
> gfp mask to drive decisions rather than a random (ASYNC) flag to control
> retries as you did here.

Makes sense.

Let me cook the next revision.

Thanks for the review, Michal.

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..1cdc3ee0b22e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8491,12 +8491,16 @@  static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8513,7 +8517,7 @@  static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8564,7 +8568,7 @@  int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),