| Message ID | 20210108121524.656872-25-qperret@google.com |
|---|---|
| State | Accepted |
| Commit | e37f37a0e780f23210b2a5cb314dab39fea7086a |
| Series | KVM/arm64: A stage 2 for the host |
On Fri, Jan 08, 2021 at 12:15:22PM +0000, Quentin Perret wrote:
> The current stage2 page-table allocator uses a memcache to get
> pre-allocated pages when it needs any. To allow re-using this code at
> EL2 which uses a concept of memory pools, make the memcache argument to
> kvm_pgtable_stage2_map() anonymous. and let the mm_ops zalloc_page()
> callbacks use it the way they need to.
>
> Signed-off-by: Quentin Perret <qperret@google.com>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 6 +++---
>  arch/arm64/kvm/hyp/pgtable.c         | 4 ++--
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 8e8f1d2c5e0e..d846bc3d3b77 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -176,8 +176,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
>   * @size:	Size of the mapping.
>   * @phys:	Physical address of the memory to map.
>   * @prot:	Permissions and attributes for the mapping.
> - * @mc:		Cache of pre-allocated GFP_PGTABLE_USER memory from which to
> - *		allocate page-table pages.
> + * @mc:		Cache of pre-allocated memory from which to allocate page-table
> + *		pages.

We should probably mention that this memory must be zeroed, since I don't
think the page-table code takes care of that.

Will
On Wednesday 03 Feb 2021 at 15:59:44 (+0000), Will Deacon wrote:
> On Fri, Jan 08, 2021 at 12:15:22PM +0000, Quentin Perret wrote:
> > The current stage2 page-table allocator uses a memcache to get
> > pre-allocated pages when it needs any. To allow re-using this code at
> > EL2 which uses a concept of memory pools, make the memcache argument to
> > kvm_pgtable_stage2_map() anonymous. and let the mm_ops zalloc_page()
> > callbacks use it the way they need to.
> >
> > [...]
> >
> > - * @mc:		Cache of pre-allocated GFP_PGTABLE_USER memory from which to
> > - *		allocate page-table pages.
> > + * @mc:		Cache of pre-allocated memory from which to allocate page-table
> > + *		pages.
>
> We should probably mention that this memory must be zeroed, since I don't
> think the page-table code takes care of that.

OK, though I think this is unrelated to this change -- this is already
true today I believe. Anyhow, I'll pile a change on top.

Cheers,
Quentin
On Thu, Feb 04, 2021 at 02:24:44PM +0000, Quentin Perret wrote:
> On Wednesday 03 Feb 2021 at 15:59:44 (+0000), Will Deacon wrote:
> > On Fri, Jan 08, 2021 at 12:15:22PM +0000, Quentin Perret wrote:
> > > [...]
> >
> > We should probably mention that this memory must be zeroed, since I don't
> > think the page-table code takes care of that.
>
> OK, though I think this is unrelated to this change -- this is already
> true today I believe. Anyhow, I'll pile a change on top.

It is, but GFP_PGTABLE_USER implies __GFP_ZERO, so the existing comment
captures that.

Will
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 8e8f1d2c5e0e..d846bc3d3b77 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -176,8 +176,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * @size:	Size of the mapping.
  * @phys:	Physical address of the memory to map.
  * @prot:	Permissions and attributes for the mapping.
- * @mc:		Cache of pre-allocated GFP_PGTABLE_USER memory from which to
- *		allocate page-table pages.
+ * @mc:		Cache of pre-allocated memory from which to allocate page-table
+ *		pages.
  *
  * The offset of @addr within a page is ignored, @size is rounded-up to
  * the next page boundary and @phys is rounded-down to the previous page
@@ -194,7 +194,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 			   u64 phys, enum kvm_pgtable_prot prot,
-			   struct kvm_mmu_memory_cache *mc);
+			   void *mc);
 
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 96a25d0b7b6e..5dd1b4978fe8 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -443,7 +443,7 @@ struct stage2_map_data {
 	kvm_pte_t		*anchor;
 
 	struct kvm_s2_mmu	*mmu;
-	struct kvm_mmu_memory_cache	*memcache;
+	void			*memcache;
 
 	struct kvm_pgtable_mm_ops	*mm_ops;
 };
@@ -613,7 +613,7 @@ static int stage2_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 			   u64 phys, enum kvm_pgtable_prot prot,
-			   struct kvm_mmu_memory_cache *mc)
+			   void *mc)
 {
 	int ret;
 	struct stage2_map_data map_data = {
The current stage2 page-table allocator uses a memcache to get
pre-allocated pages when it needs any. To allow re-using this code at
EL2 which uses a concept of memory pools, make the memcache argument to
kvm_pgtable_stage2_map() anonymous, and let the mm_ops zalloc_page()
callbacks use it the way they need to.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 6 +++---
 arch/arm64/kvm/hyp/pgtable.c         | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)
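The pattern the patch enables — two different allocator backends handed to the same map code through an untyped pointer, each responsible for returning zeroed pages (the point raised in the review) — can be sketched in self-contained user-space C. All names here (memory_cache, memory_pool, map_one_page) are hypothetical stand-ins, not the kernel's kvm_pgtable_mm_ops API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Host-style backend: a cache of pre-allocated pages (cf. kvm_mmu_memory_cache). */
struct memory_cache {
	void *pages[16];
	int nobjs;
};

/* EL2-style backend: a fixed carveout handed out as a pool, in order. */
struct memory_pool {
	unsigned char *base;
	size_t next, capacity;
};

/* Each backend supplies its own zalloc callback; the page-table code only
 * ever forwards the opaque void * it was given, so it needs no knowledge
 * of which backend is in use. Both callbacks return zeroed pages, since
 * the page-table code relies on that. */
static void *cache_zalloc_page(void *arg)
{
	struct memory_cache *mc = arg;
	void *p = mc->nobjs ? mc->pages[--mc->nobjs] : NULL;

	if (p)
		memset(p, 0, PAGE_SIZE);
	return p;
}

static void *pool_zalloc_page(void *arg)
{
	struct memory_pool *pool = arg;
	void *p;

	if (pool->next + PAGE_SIZE > pool->capacity)
		return NULL;
	p = pool->base + pool->next;
	pool->next += PAGE_SIZE;
	memset(p, 0, PAGE_SIZE);
	return p;
}

/* Stand-in for kvm_pgtable_stage2_map(): takes the memcache as an
 * anonymous void * and passes it straight to the installed callback. */
static void *map_one_page(void *(*zalloc_page)(void *), void *mc)
{
	return zalloc_page(mc);
}
```

Because `mc` stays untyped all the way down, the host can keep passing its memcache while the EL2 code passes a pool, with no change to the shared walker.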