Message ID | 20240801060826.559858-23-rppt@kernel.org
State | Superseded |
Series | mm: introduce numa_memblks
On Thu, 1 Aug 2024 09:08:22 +0300 Mike Rapoport <rppt@kernel.org> wrote:

> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> numa_cleanup_meminfo() moves blocks outside system RAM to
> numa_reserved_meminfo and it uses 0 and PFN_PHYS(max_pfn) to determine
> the memory boundaries.
>
> Replace the memory range boundaries with more portable
> memblock_start_of_DRAM() and memblock_end_of_DRAM().
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> Tested-by: Zi Yan <ziy@nvidia.com> # for x86_64 and arm64

Makes sense

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  mm/numa_memblks.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index e97665a5e8ce..e4358ad92233 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -212,8 +212,8 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
>   */
>  int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
>  {
> -	const u64 low = 0;
> -	const u64 high = PFN_PHYS(max_pfn);
> +	const u64 low = memblock_start_of_DRAM();
> +	const u64 high = memblock_end_of_DRAM();
>  	int i, j, k;
>
>  	/* first, trim all entries */
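[Editor's note: to make the trimming step concrete, here is a minimal user-space model of what numa_cleanup_meminfo() does with the (low, high) pair: blocks entirely outside the range are set aside as reserved, and blocks straddling a boundary are clamped to it. The names and types (struct blk, trim_block) are hypothetical simplifications for illustration, not the kernel code itself.]

#include <stdint.h>
#include <stdio.h>

struct blk { uint64_t start, end; };

/* Returns 1 if the block still intersects [low, high) after trimming,
 * 0 if it fell entirely outside and should be treated as reserved. */
static int trim_block(struct blk *b, uint64_t low, uint64_t high)
{
	if (b->end <= low || b->start >= high)
		return 0;               /* fully outside system RAM */
	if (b->start < low)
		b->start = low;         /* clamp to start of DRAM */
	if (b->end > high)
		b->end = high;          /* clamp to end of DRAM */
	return 1;
}

int main(void)
{
	/* Hypothetical arm64-like layout: DRAM begins at 2 GiB. */
	uint64_t low = 0x80000000ULL, high = 0x880000000ULL;
	struct blk b = { 0x40000000ULL, 0x100000000ULL };

	if (trim_block(&b, low, high))
		printf("kept: [%#llx, %#llx)\n",
		       (unsigned long long)b.start,
		       (unsigned long long)b.end);
	return 0;
}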
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> numa_cleanup_meminfo() moves blocks outside system RAM to
> numa_reserved_meminfo and it uses 0 and PFN_PHYS(max_pfn) to determine
> the memory boundaries.
>
> Replace the memory range boundaries with more portable
> memblock_start_of_DRAM() and memblock_end_of_DRAM().

Can you say a bit more about why this is more portable? Is there any
scenario for which (0, max_pfn) does the wrong thing?
On Mon, Aug 05, 2024 at 01:21:02PM -0700, Dan Williams wrote:
> Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> >
> > numa_cleanup_meminfo() moves blocks outside system RAM to
> > numa_reserved_meminfo and it uses 0 and PFN_PHYS(max_pfn) to determine
> > the memory boundaries.
> >
> > Replace the memory range boundaries with more portable
> > memblock_start_of_DRAM() and memblock_end_of_DRAM().
>
> Can you say a bit more about why this is more portable? Is there any
> scenario for which (0, max_pfn) does the wrong thing?

arm64 may have DRAM starting at addresses other than 0. And max_pfn
seems to me a redundant global variable that I'd love to see gone.
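[Editor's note: for context, memblock_start_of_DRAM() and memblock_end_of_DRAM() return the base of the first and the end of the last region in memblock.memory. The toy user-space model below, with a hypothetical struct region array standing in for the kernel's memblock data, shows why the start is not necessarily 0 on machines like arm64.]

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for memblock's sorted array of memory regions. */
struct region { uint64_t base, size; };

static struct region memory[] = {
	{ 0x080000000ULL, 0x080000000ULL },  /* 2 GiB of DRAM at 2 GiB */
	{ 0x880000000ULL, 0x080000000ULL },  /* 2 GiB of DRAM at 34 GiB */
};
static const int nr_regions = sizeof(memory) / sizeof(memory[0]);

/* Models memblock_start_of_DRAM(): base of the lowest region. */
static uint64_t start_of_dram(void)
{
	return memory[0].base;
}

/* Models memblock_end_of_DRAM(): one past the highest region. */
static uint64_t end_of_dram(void)
{
	return memory[nr_regions - 1].base + memory[nr_regions - 1].size;
}

int main(void)
{
	/* Prints [0x80000000, 0x900000000): DRAM does not start at 0. */
	printf("DRAM: [%#llx, %#llx)\n",
	       (unsigned long long)start_of_dram(),
	       (unsigned long long)end_of_dram());
	return 0;
}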
On 01.08.24 08:08, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> numa_cleanup_meminfo() moves blocks outside system RAM to
> numa_reserved_meminfo and it uses 0 and PFN_PHYS(max_pfn) to determine
> the memory boundaries.
>
> Replace the memory range boundaries with more portable
> memblock_start_of_DRAM() and memblock_end_of_DRAM().
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> Tested-by: Zi Yan <ziy@nvidia.com> # for x86_64 and arm64
> ---

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index e97665a5e8ce..e4358ad92233 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -212,8 +212,8 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
  */
 int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
 {
-	const u64 low = 0;
-	const u64 high = PFN_PHYS(max_pfn);
+	const u64 low = memblock_start_of_DRAM();
+	const u64 high = memblock_end_of_DRAM();
 	int i, j, k;
 
 	/* first, trim all entries */
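[Editor's note: a small hedged sketch of the behavioral difference the patch makes, using made-up values for an imaginary arm64-like machine whose DRAM starts at 2 GiB. PFN_PHYS and max_pfn are modeled in user space here; they are not the kernel symbols.]

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PFN_PHYS(pfn) ((uint64_t)(pfn) << PAGE_SHIFT)

int main(void)
{
	/* Hypothetical machine: 4 GiB of DRAM starting at 2 GiB. */
	uint64_t dram_base = 0x080000000ULL;
	uint64_t dram_end  = 0x180000000ULL;
	uint64_t max_pfn   = dram_end >> PAGE_SHIFT;

	/* Old boundaries: (0, PFN_PHYS(max_pfn)) wrongly cover the
	 * 2 GiB of address space below DRAM that is not RAM at all. */
	printf("old: [%#llx, %#llx)\n", 0ULL,
	       (unsigned long long)PFN_PHYS(max_pfn));

	/* New boundaries: the actual extent of DRAM as memblock sees it. */
	printf("new: [%#llx, %#llx)\n",
	       (unsigned long long)dram_base,
	       (unsigned long long)dram_end);
	return 0;
}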